Test Report: Docker_Linux_crio 22352

9a7985111956b2877773a073c576921d0f069a2d:2025-12-28:43023

Test fail (26/332)
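
All of the failures below share a single root cause: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc --root /run/runc list -f json` on the node, and on this crio image /run/runc does not exist, so every `addons disable` invocation exits with status 11 (MK_ADDON_DISABLE_PAUSED).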

TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable volcano --alsologtostderr -v=1: exit status 11 (259.257776ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:12.339529   18458 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:12.339869   18458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:12.339881   18458 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:12.339885   18458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:12.340170   18458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:12.340520   18458 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:12.340870   18458 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:12.340893   18458 addons.go:622] checking whether the cluster is paused
	I1228 06:29:12.340997   18458 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:12.341017   18458 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:12.341430   18458 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:12.359421   18458 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:12.359465   18458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:12.376572   18458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:12.466386   18458 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:12.514617   18458 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:12.528503   18458 out.go:203] 
	W1228 06:29:12.529662   18458 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:12.529683   18458 out.go:285] * 
	* 
	W1228 06:29:12.530483   18458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:12.531604   18458 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
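
The paused-check failure above can be reproduced by hand; a minimal sketch, assuming the profile name from this run (the `ls /run` step is only a way to see which runtime state directory actually exists on the node, not part of the test):

	# Re-run the exact command the paused-check executes on the node:
	minikube ssh -p addons-614829 -- sudo runc --root /run/runc list -f json
	# On this image it fails with:
	#   level=error msg="open /run/runc: no such file or directory"

	# Inspect which OCI runtime state directories exist under /run on the crio node:
	minikube ssh -p addons-614829 -- ls /run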

TestAddons/parallel/Registry (13.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.136333ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002645114s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003803859s
addons_test.go:394: (dbg) Run:  kubectl --context addons-614829 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-614829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-614829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.308484708s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 ip
2025/12/28 06:29:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable registry --alsologtostderr -v=1: exit status 11 (274.937129ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:33.897643   21403 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:33.898000   21403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:33.898013   21403 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:33.898020   21403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:33.898272   21403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:33.898701   21403 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:33.899170   21403 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:33.899199   21403 addons.go:622] checking whether the cluster is paused
	I1228 06:29:33.899331   21403 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:33.899345   21403 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:33.899757   21403 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:33.917485   21403 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:33.917551   21403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:33.934975   21403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:34.027069   21403 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:34.092983   21403 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:34.108971   21403 out.go:203] 
	W1228 06:29:34.111097   21403 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:34.111114   21403 out.go:285] * 
	* 
	W1228 06:29:34.111847   21403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:34.113983   21403 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.80s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.79135ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-614829
addons_test.go:334: (dbg) Run:  kubectl --context addons-614829 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (273.834457ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:36.365159   21688 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:36.365430   21688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:36.365439   21688 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:36.365443   21688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:36.365626   21688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:36.365863   21688 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:36.366239   21688 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:36.366261   21688 addons.go:622] checking whether the cluster is paused
	I1228 06:29:36.366349   21688 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:36.366360   21688 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:36.366715   21688 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:36.387108   21688 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:36.387176   21688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:36.407369   21688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:36.501301   21688 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:36.563221   21688 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:36.577948   21688 out.go:203] 
	W1228 06:29:36.579223   21688 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:36.579248   21688 out.go:285] * 
	* 
	W1228 06:29:36.579921   21688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:36.581939   21688 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (8.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-614829 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-614829 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-614829 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [a26a6959-e6b2-443c-89f1-e370d73e056a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [a26a6959-e6b2-443c-89f1-e370d73e056a] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003784222s
I1228 06:29:27.785230    9076 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-614829 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (257.216706ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:28.592905   20455 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:28.593199   20455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.593208   20455 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:28.593212   20455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.593382   20455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:28.593625   20455 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:28.593943   20455 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.593960   20455 addons.go:622] checking whether the cluster is paused
	I1228 06:29:28.594069   20455 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.594092   20455 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:28.594438   20455 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:28.612699   20455 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:28.612785   20455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:28.630920   20455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:28.720359   20455 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:28.769503   20455 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:28.785219   20455 out.go:203] 
	W1228 06:29:28.786625   20455 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:28.786649   20455 out.go:285] * 
	* 
	W1228 06:29:28.787661   20455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:28.789024   20455 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable ingress --alsologtostderr -v=1: exit status 11 (270.133981ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:28.854943   20521 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:28.855292   20521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.855307   20521 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:28.855314   20521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.855647   20521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:28.855994   20521 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:28.856451   20521 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.856482   20521 addons.go:622] checking whether the cluster is paused
	I1228 06:29:28.856602   20521 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.856624   20521 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:28.857120   20521 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:28.877724   20521 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:28.877782   20521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:28.896004   20521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:28.991462   20521 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:29.044427   20521 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:29.058152   20521 out.go:203] 
	W1228 06:29:29.059178   20521 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:29.059198   20521 out.go:285] * 
	* 
	W1228 06:29:29.060137   20521 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:29.061290   20521 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (8.74s)

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-dzhfb" [9fd9addf-5af5-4c66-9d7c-2ef577514ec2] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003350928s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (265.639507ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:38.696001   22000 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:38.696285   22000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:38.696295   22000 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:38.696300   22000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:38.696473   22000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:38.696748   22000 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:38.697089   22000 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:38.697110   22000 addons.go:622] checking whether the cluster is paused
	I1228 06:29:38.697200   22000 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:38.697221   22000 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:38.697599   22000 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:38.718253   22000 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:38.718328   22000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:38.737379   22000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:38.828344   22000 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:38.885419   22000 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:38.899122   22000 out.go:203] 
	W1228 06:29:38.900308   22000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:38.900326   22000 out.go:285] * 
	* 
	W1228 06:29:38.901302   22000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:38.902502   22000 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.917102ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00249189s
addons_test.go:465: (dbg) Run:  kubectl --context addons-614829 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.233606ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:25.437129   20107 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:25.437400   20107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:25.437409   20107 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:25.437413   20107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:25.437599   20107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:25.437839   20107 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:25.438178   20107 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:25.438200   20107 addons.go:622] checking whether the cluster is paused
	I1228 06:29:25.438285   20107 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:25.438298   20107 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:25.438670   20107 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:25.456240   20107 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:25.456288   20107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:25.474331   20107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:25.563659   20107 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:25.612816   20107 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:25.627069   20107 out.go:203] 
	W1228 06:29:25.628447   20107 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:25.628478   20107 out.go:285] * 
	* 
	W1228 06:29:25.629209   20107 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:25.630391   20107 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (43.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1228 06:29:34.120615    9076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1228 06:29:34.123886    9076 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1228 06:29:34.123914    9076 kapi.go:107] duration metric: took 3.304677ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.31703ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-614829 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-614829 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [8cb78000-6cf0-4c00-82f9-381c29435ccc] Pending
helpers_test.go:353: "task-pv-pod" [8cb78000-6cf0-4c00-82f9-381c29435ccc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [8cb78000-6cf0-4c00-82f9-381c29435ccc] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003592423s
addons_test.go:574: (dbg) Run:  kubectl --context addons-614829 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-614829 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-614829 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-614829 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-614829 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-614829 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-614829 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [f06268ad-97eb-489d-a5f6-dcc2f5a8b608] Pending
helpers_test.go:353: "task-pv-pod-restore" [f06268ad-97eb-489d-a5f6-dcc2f5a8b608] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.002948867s
addons_test.go:616: (dbg) Run:  kubectl --context addons-614829 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-614829 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-614829 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (256.658807ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:30:17.223472   23106 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:30:17.223779   23106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:30:17.223789   23106 out.go:374] Setting ErrFile to fd 2...
	I1228 06:30:17.223794   23106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:30:17.224044   23106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:30:17.224324   23106 mustload.go:66] Loading cluster: addons-614829
	I1228 06:30:17.224628   23106 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:30:17.224645   23106 addons.go:622] checking whether the cluster is paused
	I1228 06:30:17.224722   23106 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:30:17.224734   23106 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:30:17.225110   23106 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:30:17.244068   23106 ssh_runner.go:195] Run: systemctl --version
	I1228 06:30:17.244135   23106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:30:17.262375   23106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:30:17.352517   23106 ssh_runner.go:195] Run: sudo crio config
	I1228 06:30:17.403606   23106 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:30:17.417524   23106 out.go:203] 
	W1228 06:30:17.418753   23106 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:30:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:30:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:30:17.418770   23106 out.go:285] * 
	* 
	W1228 06:30:17.419522   23106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:30:17.420512   23106 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (262.799225ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:30:17.486593   23170 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:30:17.486926   23170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:30:17.486938   23170 out.go:374] Setting ErrFile to fd 2...
	I1228 06:30:17.486946   23170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:30:17.487201   23170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:30:17.487543   23170 mustload.go:66] Loading cluster: addons-614829
	I1228 06:30:17.487945   23170 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:30:17.487969   23170 addons.go:622] checking whether the cluster is paused
	I1228 06:30:17.488094   23170 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:30:17.488122   23170 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:30:17.488530   23170 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:30:17.506556   23170 ssh_runner.go:195] Run: systemctl --version
	I1228 06:30:17.506622   23170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:30:17.524265   23170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:30:17.614764   23170 ssh_runner.go:195] Run: sudo crio config
	I1228 06:30:17.666364   23170 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:30:17.680623   23170 out.go:203] 
	W1228 06:30:17.681976   23170 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:30:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:30:17.681991   23170 out.go:285] * 
	W1228 06:30:17.682720   23170 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:30:17.683941   23170 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.57s)
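
Every MK_ADDON_DISABLE_PAUSED (and MK_ADDON_ENABLE_PAUSED) exit in this report traces back to the same probe: before touching an addon, minikube checks whether the cluster is paused by shelling into the node and running `sudo runc --root /run/runc list -f json`, and on this crio node /run/runc does not exist, so the probe exits 1 and the addon command aborts. A minimal Go sketch that re-runs the same probe from the host follows; it is not minikube's implementation, and it assumes only that the node is the Docker container named addons-614829 (as in the inspect output later in this report) and that `docker exec` is available:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the exact command from the log inside the node container.
	// On this node it fails with:
	//   open /run/runc: no such file or directory
	out, err := exec.Command("docker", "exec", "addons-614829",
		"sudo", "runc", "--root", "/run/runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("output: %s\n", out)
	if err != nil {
		// This is the same non-zero exit that minikube surfaces above
		// as MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED.
		fmt.Printf("probe failed: %v\n", err)
	}
}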

TestAddons/parallel/Headlamp (2.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-614829 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-614829 --alsologtostderr -v=1: exit status 11 (265.609279ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1228 06:29:20.380845   18801 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:20.381041   18801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:20.381053   18801 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:20.381060   18801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:20.381346   18801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:20.381636   18801 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:20.381994   18801 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:20.382013   18801 addons.go:622] checking whether the cluster is paused
	I1228 06:29:20.382128   18801 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:20.382143   18801 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:20.382550   18801 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:20.401441   18801 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:20.401503   18801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:20.418681   18801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:20.509318   18801 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:20.562531   18801 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:20.579482   18801 out.go:203] 
	W1228 06:29:20.581048   18801 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:20.581084   18801 out.go:285] * 
	W1228 06:29:20.581935   18801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:20.583320   18801 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-614829 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-614829
helpers_test.go:244: (dbg) docker inspect addons-614829:

-- stdout --
	[
	    {
	        "Id": "4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1",
	        "Created": "2025-12-28T06:28:06.188414105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:28:06.235504856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1/hosts",
	        "LogPath": "/var/lib/docker/containers/4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1/4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1-json.log",
	        "Name": "/addons-614829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-614829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-614829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b67718f7d8f037deb721f9aafb6f09501326872934401170bca89841f313bf1",
	                "LowerDir": "/var/lib/docker/overlay2/caf8048bd2a75c89601f1c2db688d5d72bad0d5a6723962578cbfb7aa056300a-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caf8048bd2a75c89601f1c2db688d5d72bad0d5a6723962578cbfb7aa056300a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caf8048bd2a75c89601f1c2db688d5d72bad0d5a6723962578cbfb7aa056300a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caf8048bd2a75c89601f1c2db688d5d72bad0d5a6723962578cbfb7aa056300a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-614829",
	                "Source": "/var/lib/docker/volumes/addons-614829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-614829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-614829",
	                "name.minikube.sigs.k8s.io": "addons-614829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a5bfa81d3470d69642d9b5128d188551ff471d83c41169a5fb121e3bd38be7e",
	            "SandboxKey": "/var/run/docker/netns/8a5bfa81d347",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-614829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25d589529b71e05fc4b0a7aa49427cf957d9b9a491d70b61f8fa263a4986d2ff",
	                    "EndpointID": "48d2cc1f9ed545521bb6147184a12cbd40584bd08be8e6f5268d9b8c30125cc9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "12:5a:b3:07:97:db",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-614829",
	                        "4b67718f7d8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
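The cli_runner lines in each stderr block above read the host-mapped SSH port straight out of this inspect document, using a Go template over .NetworkSettings.Ports. A small sketch of the same lookup (an illustrative helper, not minikube code; it assumes only the docker CLI and the addons-614829 container from this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same format string the log shows being passed to
	// `docker container inspect -f`.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"addons-614829").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this prints 32768, matching the sshutil lines.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
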
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-614829 -n addons-614829
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-614829 logs -n 25: (1.28334322s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-239257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-239257   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ delete  │ -p download-only-239257                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-239257   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ start   │ -o=json --download-only -p download-only-337184 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-337184   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ delete  │ -p download-only-337184                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-337184   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ delete  │ -p download-only-239257                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-239257   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ delete  │ -p download-only-337184                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-337184   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ start   │ --download-only -p download-docker-744245 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-744245 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ -p download-docker-744245                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-744245 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ start   │ --download-only -p binary-mirror-452060 --alsologtostderr --binary-mirror http://127.0.0.1:37845 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-452060   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ -p binary-mirror-452060                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-452060   │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ addons  │ disable dashboard -p addons-614829                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ addons  │ enable dashboard -p addons-614829                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ start   │ -p addons-614829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:29 UTC │
	│ addons  │ addons-614829 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:29 UTC │                     │
	│ addons  │ addons-614829 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:29 UTC │                     │
	│ addons  │ enable headlamp -p addons-614829 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-614829          │ jenkins │ v1.37.0 │ 28 Dec 25 06:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:27:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:27:42.717886   10425 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:27:42.718152   10425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:42.718162   10425 out.go:374] Setting ErrFile to fd 2...
	I1228 06:27:42.718166   10425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:42.718341   10425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:27:42.718853   10425 out.go:368] Setting JSON to false
	I1228 06:27:42.719594   10425 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":615,"bootTime":1766902648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:27:42.719642   10425 start.go:143] virtualization: kvm guest
	I1228 06:27:42.721602   10425 out.go:179] * [addons-614829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:27:42.723425   10425 notify.go:221] Checking for updates...
	I1228 06:27:42.723456   10425 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:27:42.724935   10425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:27:42.726070   10425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:27:42.727228   10425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:27:42.728366   10425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:27:42.729471   10425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:27:42.730690   10425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:27:42.753060   10425 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:27:42.753148   10425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:42.806073   10425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-28 06:27:42.796741645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:42.806180   10425 docker.go:319] overlay module found
	I1228 06:27:42.808424   10425 out.go:179] * Using the docker driver based on user configuration
	I1228 06:27:42.809566   10425 start.go:309] selected driver: docker
	I1228 06:27:42.809581   10425 start.go:928] validating driver "docker" against <nil>
	I1228 06:27:42.809591   10425 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:27:42.810243   10425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:42.863017   10425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-28 06:27:42.853160928 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:42.863260   10425 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:27:42.863544   10425 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:27:42.865224   10425 out.go:179] * Using Docker driver with root privileges
	I1228 06:27:42.866429   10425 cni.go:84] Creating CNI manager for ""
	I1228 06:27:42.866504   10425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:27:42.866517   10425 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:27:42.866582   10425 start.go:353] cluster config:
	{Name:addons-614829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-614829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:27:42.867781   10425 out.go:179] * Starting "addons-614829" primary control-plane node in "addons-614829" cluster
	I1228 06:27:42.868699   10425 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:27:42.869799   10425 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:27:42.870878   10425 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:27:42.870904   10425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:27:42.870911   10425 cache.go:65] Caching tarball of preloaded images
	I1228 06:27:42.870965   10425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:27:42.871059   10425 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:27:42.871077   10425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:27:42.871430   10425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/config.json ...
	I1228 06:27:42.871455   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/config.json: {Name:mk8a390b61ab64cf392ab735bcc2580418200bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:27:42.886837   10425 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:27:42.886943   10425 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:27:42.886959   10425 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory, skipping pull
	I1228 06:27:42.886963   10425 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in cache, skipping pull
	I1228 06:27:42.886973   10425 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 as a tarball
	I1228 06:27:42.886980   10425 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 from local cache
	I1228 06:27:55.704240   10425 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 from cached tarball
	I1228 06:27:55.704281   10425 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:27:55.704331   10425 start.go:360] acquireMachinesLock for addons-614829: {Name:mk8fcad242e547801f4b1309fcdb9b120aca7c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:27:55.704436   10425 start.go:364] duration metric: took 83.634µs to acquireMachinesLock for "addons-614829"
	I1228 06:27:55.704468   10425 start.go:93] Provisioning new machine with config: &{Name:addons-614829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-614829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:27:55.704547   10425 start.go:125] createHost starting for "" (driver="docker")
	I1228 06:27:55.706415   10425 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1228 06:27:55.706633   10425 start.go:159] libmachine.API.Create for "addons-614829" (driver="docker")
	I1228 06:27:55.706670   10425 client.go:173] LocalClient.Create starting
	I1228 06:27:55.706778   10425 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:27:55.789000   10425 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:27:55.920343   10425 cli_runner.go:164] Run: docker network inspect addons-614829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:27:55.938176   10425 cli_runner.go:211] docker network inspect addons-614829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:27:55.938245   10425 network_create.go:284] running [docker network inspect addons-614829] to gather additional debugging logs...
	I1228 06:27:55.938263   10425 cli_runner.go:164] Run: docker network inspect addons-614829
	W1228 06:27:55.955111   10425 cli_runner.go:211] docker network inspect addons-614829 returned with exit code 1
	I1228 06:27:55.955145   10425 network_create.go:287] error running [docker network inspect addons-614829]: docker network inspect addons-614829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-614829 not found
	I1228 06:27:55.955182   10425 network_create.go:289] output of [docker network inspect addons-614829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-614829 not found
	
	** /stderr **
	I1228 06:27:55.955273   10425 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:27:55.971736   10425 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bd4860}
	I1228 06:27:55.971766   10425 network_create.go:124] attempt to create docker network addons-614829 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1228 06:27:55.971806   10425 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-614829 addons-614829
	I1228 06:27:56.016778   10425 network_create.go:108] docker network addons-614829 192.168.49.0/24 created
	I1228 06:27:56.016806   10425 kic.go:121] calculated static IP "192.168.49.2" for the "addons-614829" container
	I1228 06:27:56.016858   10425 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:27:56.032549   10425 cli_runner.go:164] Run: docker volume create addons-614829 --label name.minikube.sigs.k8s.io=addons-614829 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:27:56.051268   10425 oci.go:103] Successfully created a docker volume addons-614829
	I1228 06:27:56.051333   10425 cli_runner.go:164] Run: docker run --rm --name addons-614829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-614829 --entrypoint /usr/bin/test -v addons-614829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:28:02.402252   10425 cli_runner.go:217] Completed: docker run --rm --name addons-614829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-614829 --entrypoint /usr/bin/test -v addons-614829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib: (6.350877476s)
	I1228 06:28:02.402287   10425 oci.go:107] Successfully prepared a docker volume addons-614829
	I1228 06:28:02.402326   10425 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:28:02.402338   10425 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:28:02.402400   10425 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-614829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:28:06.119151   10425 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-614829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.716703439s)
	I1228 06:28:06.119178   10425 kic.go:203] duration metric: took 3.71683855s to extract preloaded images to volume ...
	W1228 06:28:06.119253   10425 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:28:06.119284   10425 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:28:06.119318   10425 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:28:06.171531   10425 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-614829 --name addons-614829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-614829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-614829 --network addons-614829 --ip 192.168.49.2 --volume addons-614829:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:28:06.471597   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Running}}
	I1228 06:28:06.490627   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:06.510268   10425 cli_runner.go:164] Run: docker exec addons-614829 stat /var/lib/dpkg/alternatives/iptables
	I1228 06:28:06.563115   10425 oci.go:144] the created container "addons-614829" has a running status.
	I1228 06:28:06.563156   10425 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa...
	I1228 06:28:06.740514   10425 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:28:06.769362   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:06.793891   10425 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:28:06.793923   10425 kic_runner.go:114] Args: [docker exec --privileged addons-614829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 06:28:06.840555   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:06.858975   10425 machine.go:94] provisionDockerMachine start ...
	I1228 06:28:06.859097   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:06.879918   10425 main.go:144] libmachine: Using SSH client type: native
	I1228 06:28:06.880170   10425 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1228 06:28:06.880201   10425 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:28:07.002707   10425 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-614829
	
	I1228 06:28:07.002735   10425 ubuntu.go:182] provisioning hostname "addons-614829"
	I1228 06:28:07.002789   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.021175   10425 main.go:144] libmachine: Using SSH client type: native
	I1228 06:28:07.021376   10425 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1228 06:28:07.021389   10425 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-614829 && echo "addons-614829" | sudo tee /etc/hostname
	I1228 06:28:07.153505   10425 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-614829
	
	I1228 06:28:07.153608   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.171720   10425 main.go:144] libmachine: Using SSH client type: native
	I1228 06:28:07.171945   10425 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1228 06:28:07.171969   10425 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-614829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-614829/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-614829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:28:07.293453   10425 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:28:07.293487   10425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:28:07.293528   10425 ubuntu.go:190] setting up certificates
	I1228 06:28:07.293555   10425 provision.go:84] configureAuth start
	I1228 06:28:07.293633   10425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-614829
	I1228 06:28:07.311824   10425 provision.go:143] copyHostCerts
	I1228 06:28:07.311907   10425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:28:07.312025   10425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:28:07.312148   10425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:28:07.312224   10425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.addons-614829 san=[127.0.0.1 192.168.49.2 addons-614829 localhost minikube]
	I1228 06:28:07.388699   10425 provision.go:177] copyRemoteCerts
	I1228 06:28:07.388784   10425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:28:07.388837   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.407227   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:07.497054   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:28:07.515793   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1228 06:28:07.532502   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:28:07.549252   10425 provision.go:87] duration metric: took 255.667014ms to configureAuth
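The configureAuth step above generates a CA-signed server certificate whose SANs are exactly the list logged at provision.go:117 ([127.0.0.1 192.168.49.2 addons-614829 localhost minikube]). A self-contained Go sketch of that kind of certificate generation using only the standard library (a simplified stand-in for minikube's crypto helpers, not the real code; errors are elided for brevity):

	// Sketch: self-signed CA plus a CA-signed server cert with IP and DNS SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server key and certificate signed by the CA, with the SANs seen above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "addons-614829"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-614829", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// Emit the server cert in PEM, as it would be scp'd to /etc/docker/server.pem.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}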
	I1228 06:28:07.549290   10425 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:28:07.549486   10425 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:28:07.549589   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.567875   10425 main.go:144] libmachine: Using SSH client type: native
	I1228 06:28:07.568101   10425 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1228 06:28:07.568119   10425 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:28:07.818665   10425 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:28:07.818695   10425 machine.go:97] duration metric: took 959.678456ms to provisionDockerMachine
	I1228 06:28:07.818709   10425 client.go:176] duration metric: took 12.112030584s to LocalClient.Create
	I1228 06:28:07.818732   10425 start.go:167] duration metric: took 12.112100162s to libmachine.API.Create "addons-614829"
	I1228 06:28:07.818740   10425 start.go:293] postStartSetup for "addons-614829" (driver="docker")
	I1228 06:28:07.818749   10425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:28:07.818806   10425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:28:07.818847   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.835776   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:07.926309   10425 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:28:07.929660   10425 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:28:07.929683   10425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:28:07.929694   10425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:28:07.929743   10425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:28:07.929765   10425 start.go:296] duration metric: took 111.020414ms for postStartSetup
	I1228 06:28:07.930046   10425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-614829
	I1228 06:28:07.947443   10425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/config.json ...
	I1228 06:28:07.947758   10425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:28:07.947807   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:07.964389   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:08.050468   10425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:28:08.054652   10425 start.go:128] duration metric: took 12.350081584s to createHost
	I1228 06:28:08.054674   10425 start.go:83] releasing machines lock for "addons-614829", held for 12.350225174s
	I1228 06:28:08.054736   10425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-614829
	I1228 06:28:08.071508   10425 ssh_runner.go:195] Run: cat /version.json
	I1228 06:28:08.071562   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:08.071563   10425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:28:08.071695   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:08.089202   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:08.089925   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:08.175150   10425 ssh_runner.go:195] Run: systemctl --version
	I1228 06:28:08.229503   10425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:28:08.261108   10425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:28:08.265852   10425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:28:08.265922   10425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:28:08.289877   10425 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1228 06:28:08.289901   10425 start.go:496] detecting cgroup driver to use...
	I1228 06:28:08.289941   10425 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:28:08.289998   10425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:28:08.305083   10425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:28:08.316574   10425 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:28:08.316629   10425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:28:08.332091   10425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:28:08.348981   10425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:28:08.428054   10425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:28:08.513512   10425 docker.go:234] disabling docker service ...
	I1228 06:28:08.513575   10425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:28:08.530535   10425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:28:08.542213   10425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:28:08.620604   10425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:28:08.699997   10425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:28:08.711810   10425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:28:08.724862   10425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:28:08.724921   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.734275   10425 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:28:08.734329   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.742533   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.750382   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.758363   10425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:28:08.765873   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.774123   10425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:28:08.786979   10425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
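The run of sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. The section headers are an assumption: the sed patterns match keys wherever they appear, so the exact layout depends on the file the base image ships.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]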
	I1228 06:28:08.795268   10425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:28:08.802099   10425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1228 06:28:08.802153   10425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1228 06:28:08.813402   10425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:28:08.820796   10425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:28:08.897695   10425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:28:09.034427   10425 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:28:09.034505   10425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:28:09.038343   10425 start.go:574] Will wait 60s for crictl version
	I1228 06:28:09.038394   10425 ssh_runner.go:195] Run: which crictl
	I1228 06:28:09.041563   10425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:28:09.064389   10425 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:28:09.064486   10425 ssh_runner.go:195] Run: crio --version
	I1228 06:28:09.090482   10425 ssh_runner.go:195] Run: crio --version
	I1228 06:28:09.118501   10425 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:28:09.119756   10425 cli_runner.go:164] Run: docker network inspect addons-614829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:28:09.136504   10425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1228 06:28:09.140453   10425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
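The one-liner above refreshes the host.minikube.internal entry by keeping every /etc/hosts line that does not already carry the name, appending the fresh mapping, and copying the temp file over /etc/hosts. A hedged Go equivalent of the same filter-and-append pattern (simplified: it writes /etc/hosts directly instead of going through a temp file and sudo cp):

	// Sketch of the /etc/hosts refresh pattern; not minikube's real code.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Keep every line that doesn't already end with the managed hostname.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}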
	I1228 06:28:09.150437   10425 kubeadm.go:884] updating cluster {Name:addons-614829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-614829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:28:09.150536   10425 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:28:09.150580   10425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:28:09.182946   10425 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:28:09.182967   10425 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:28:09.183010   10425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:28:09.206860   10425 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:28:09.206883   10425 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:28:09.206893   10425 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1228 06:28:09.206985   10425 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-614829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-614829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:28:09.207091   10425 ssh_runner.go:195] Run: crio config
	I1228 06:28:09.249635   10425 cni.go:84] Creating CNI manager for ""
	I1228 06:28:09.249656   10425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:28:09.249670   10425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:28:09.249695   10425 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-614829 NodeName:addons-614829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:28:09.249817   10425 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-614829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:28:09.249870   10425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:28:09.257815   10425 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:28:09.257872   10425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:28:09.265494   10425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1228 06:28:09.277228   10425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:28:09.291625   10425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1228 06:28:09.303720   10425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:28:09.307155   10425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:28:09.316520   10425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:28:09.395501   10425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:28:09.419152   10425 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829 for IP: 192.168.49.2
	I1228 06:28:09.419174   10425 certs.go:195] generating shared ca certs ...
	I1228 06:28:09.419202   10425 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.419327   10425 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:28:09.558759   10425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt ...
	I1228 06:28:09.558790   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt: {Name:mkdd8701063f501a41d1a0cbe35d6b92f97c5576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.558964   10425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key ...
	I1228 06:28:09.558977   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key: {Name:mkdf6b6077a7f41cf03db69787fb6a1e7d43e1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.559064   10425 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:28:09.713393   10425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt ...
	I1228 06:28:09.713421   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt: {Name:mk938b272c91ec7f857e9ce86a8f6037b34c563d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.713572   10425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key ...
	I1228 06:28:09.713582   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key: {Name:mk08e36c7ea65081e7593d2f2743d046b2498d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.713653   10425 certs.go:257] generating profile certs ...
	I1228 06:28:09.713703   10425 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.key
	I1228 06:28:09.713719   10425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt with IP's: []
	I1228 06:28:09.781260   10425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt ...
	I1228 06:28:09.781289   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: {Name:mkbec1daa9faca3aa1ffa6543c7a830db1fec7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.781455   10425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.key ...
	I1228 06:28:09.781466   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.key: {Name:mk146a75058a32b96f4df392ca4ce19a7d8575c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.781542   10425 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key.9959b17d
	I1228 06:28:09.781561   10425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt.9959b17d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1228 06:28:09.847655   10425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt.9959b17d ...
	I1228 06:28:09.847683   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt.9959b17d: {Name:mk1f56a3b691dc0a10e796bb67a174ba7e47dbea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.847836   10425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key.9959b17d ...
	I1228 06:28:09.847848   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key.9959b17d: {Name:mk31df6338539785a8c794868ec768506549af05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:09.847916   10425 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt.9959b17d -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt
	I1228 06:28:09.847993   10425 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key.9959b17d -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key
	I1228 06:28:09.848083   10425 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.key
	I1228 06:28:09.848107   10425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.crt with IP's: []
	I1228 06:28:10.005036   10425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.crt ...
	I1228 06:28:10.005066   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.crt: {Name:mkb4d57e451c2649d0acde5be333480e7ea24324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:10.005222   10425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.key ...
	I1228 06:28:10.005233   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.key: {Name:mk86ac15e7c42c2012fabe919fe525461132f7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:10.005401   10425 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:28:10.005438   10425 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:28:10.005461   10425 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:28:10.005484   10425 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:28:10.006160   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:28:10.023571   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:28:10.040083   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:28:10.056583   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:28:10.072969   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1228 06:28:10.089312   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:28:10.105329   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:28:10.123241   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:28:10.140052   10425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:28:10.159721   10425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:28:10.172866   10425 ssh_runner.go:195] Run: openssl version
	I1228 06:28:10.178940   10425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:28:10.186294   10425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:28:10.196233   10425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:28:10.199915   10425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:28:10.199968   10425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:28:10.234003   10425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:28:10.241603   10425 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:28:10.248653   10425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:28:10.252060   10425 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:28:10.252105   10425 kubeadm.go:401] StartCluster: {Name:addons-614829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-614829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:28:10.252210   10425 ssh_runner.go:195] Run: sudo crio config
	I1228 06:28:10.298364   10425 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:28:10.309963   10425 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:28:10Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:28:10.310063   10425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:28:10.317895   10425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:28:10.325470   10425 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:28:10.325528   10425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:28:10.332918   10425 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:28:10.332941   10425 kubeadm.go:158] found existing configuration files:
	
	I1228 06:28:10.332984   10425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:28:10.340140   10425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:28:10.340195   10425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:28:10.347086   10425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:28:10.354487   10425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:28:10.354536   10425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:28:10.361338   10425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:28:10.368258   10425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:28:10.368304   10425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:28:10.375077   10425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:28:10.382266   10425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:28:10.382321   10425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 06:28:10.389051   10425 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:28:10.481412   10425 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:28:10.534708   10425 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 06:28:17.043983   10425 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:28:17.044096   10425 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:28:17.044218   10425 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:28:17.044355   10425 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:28:17.044410   10425 kubeadm.go:319] OS: Linux
	I1228 06:28:17.044481   10425 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:28:17.044558   10425 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:28:17.044631   10425 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:28:17.044700   10425 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:28:17.044770   10425 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:28:17.044838   10425 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:28:17.044930   10425 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:28:17.045005   10425 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:28:17.045142   10425 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:28:17.045309   10425 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:28:17.045443   10425 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:28:17.045552   10425 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 06:28:17.047166   10425 out.go:252]   - Generating certificates and keys ...
	I1228 06:28:17.047233   10425 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:28:17.047292   10425 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:28:17.047354   10425 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:28:17.047404   10425 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:28:17.047475   10425 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:28:17.047541   10425 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:28:17.047589   10425 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:28:17.047692   10425 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-614829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1228 06:28:17.047739   10425 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:28:17.047846   10425 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-614829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1228 06:28:17.047913   10425 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:28:17.047985   10425 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:28:17.048066   10425 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:28:17.048123   10425 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:28:17.048201   10425 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:28:17.048290   10425 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:28:17.048375   10425 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:28:17.048465   10425 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:28:17.048524   10425 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:28:17.048620   10425 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:28:17.048703   10425 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:28:17.049973   10425 out.go:252]   - Booting up control plane ...
	I1228 06:28:17.050084   10425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:28:17.050197   10425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:28:17.050293   10425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:28:17.050444   10425 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:28:17.050598   10425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:28:17.050710   10425 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:28:17.050840   10425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:28:17.050903   10425 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:28:17.051072   10425 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:28:17.051209   10425 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:28:17.051304   10425 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.671785ms
	I1228 06:28:17.051413   10425 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:28:17.051532   10425 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1228 06:28:17.051655   10425 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:28:17.051770   10425 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:28:17.051890   10425 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.493973ms
	I1228 06:28:17.051990   10425 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.085044598s
	I1228 06:28:17.052110   10425 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501727631s
	I1228 06:28:17.052260   10425 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:28:17.052432   10425 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:28:17.052525   10425 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:28:17.052762   10425 kubeadm.go:319] [mark-control-plane] Marking the node addons-614829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:28:17.052852   10425 kubeadm.go:319] [bootstrap-token] Using token: 4t3z3j.thf3nqmjrajc2o55
	I1228 06:28:17.054205   10425 out.go:252]   - Configuring RBAC rules ...
	I1228 06:28:17.054295   10425 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:28:17.054400   10425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:28:17.054531   10425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:28:17.054638   10425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:28:17.054736   10425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:28:17.054814   10425 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:28:17.054921   10425 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:28:17.054962   10425 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:28:17.055002   10425 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:28:17.055008   10425 kubeadm.go:319] 
	I1228 06:28:17.055104   10425 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:28:17.055113   10425 kubeadm.go:319] 
	I1228 06:28:17.055213   10425 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:28:17.055221   10425 kubeadm.go:319] 
	I1228 06:28:17.055248   10425 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:28:17.055300   10425 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:28:17.055347   10425 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:28:17.055353   10425 kubeadm.go:319] 
	I1228 06:28:17.055422   10425 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:28:17.055433   10425 kubeadm.go:319] 
	I1228 06:28:17.055508   10425 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:28:17.055518   10425 kubeadm.go:319] 
	I1228 06:28:17.055600   10425 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:28:17.055668   10425 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:28:17.055730   10425 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:28:17.055736   10425 kubeadm.go:319] 
	I1228 06:28:17.055812   10425 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:28:17.055883   10425 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:28:17.055890   10425 kubeadm.go:319] 
	I1228 06:28:17.055970   10425 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4t3z3j.thf3nqmjrajc2o55 \
	I1228 06:28:17.056092   10425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:28:17.056127   10425 kubeadm.go:319] 	--control-plane 
	I1228 06:28:17.056135   10425 kubeadm.go:319] 
	I1228 06:28:17.056269   10425 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:28:17.056278   10425 kubeadm.go:319] 
	I1228 06:28:17.056345   10425 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4t3z3j.thf3nqmjrajc2o55 \
	I1228 06:28:17.056452   10425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
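Note: the join commands above embed a bootstrap token and CA-cert hash from this run. As a minimal sketch (token and hash copied from the log; bootstrap tokens normally expire after 24h), joining a worker would look like:

	# On the prospective worker node (sketch). If the token has expired,
	# regenerate the full command on the control plane with:
	#   kubeadm token create --print-join-command
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token 4t3z3j.thf3nqmjrajc2o55 \
	  --discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4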
	I1228 06:28:17.056468   10425 cni.go:84] Creating CNI manager for ""
	I1228 06:28:17.056477   10425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:28:17.057853   10425 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1228 06:28:17.058870   10425 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:28:17.062849   10425 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:28:17.062864   10425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:28:17.075397   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
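Note: because the docker driver is paired with the crio runtime, minikube selects kindnet and applies its manifest with the cluster's own kubectl (lines above). A quick way to confirm the CNI pods came up, as a sketch (the app=kindnet label is assumed from upstream kindnet manifests, not shown in this log):

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide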
	I1228 06:28:17.267429   10425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:28:17.267496   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:17.267523   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-614829 minikube.k8s.io/updated_at=2025_12_28T06_28_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=addons-614829 minikube.k8s.io/primary=true
	I1228 06:28:17.276249   10425 ops.go:34] apiserver oom_adj: -16
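Note: the oom_adj read above verifies the OOM-killer bias assigned to the API server; -16 makes the kernel much less likely to kill it under memory pressure. Sketch for inspecting it by hand (oom_score_adj is the non-deprecated counterpart of oom_adj):

	cat /proc/$(pgrep kube-apiserver)/oom_adj
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj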
	I1228 06:28:17.354912   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:17.855396   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:18.355679   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:18.855663   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:19.355500   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:19.855539   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:20.354994   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:20.855422   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:21.355178   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:21.855839   10425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:28:21.918680   10425 kubeadm.go:1114] duration metric: took 4.651244056s to wait for elevateKubeSystemPrivileges
	I1228 06:28:21.918714   10425 kubeadm.go:403] duration metric: took 11.666612708s to StartCluster
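Note: the repeated `kubectl get sa default` runs above are a poll: elevateKubeSystemPrivileges waits until the "default" ServiceAccount exists, i.e. until the controller-manager has bootstrapped kube-system, so the cluster-admin grant (clusterrolebinding minikube-rbac, issued at 06:28:17.267496) can take effect; the duration metric records the 4.65s wait. The same wait, sketched in shell (interval illustrative, roughly matching the ~500ms spacing in the log):

	until sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5   # the log shows roughly 500ms between attempts
	done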
	I1228 06:28:21.918730   10425 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:21.918849   10425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:28:21.919250   10425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:21.919441   10425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:28:21.919460   10425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:28:21.919525   10425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
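Note: the toEnable map is the merged view of the profile's defaults and any addon flags; each true entry is then reconciled on its own, which is why the "Setting addon" lines below interleave out of timestamp order. The same toggles are available from the CLI, e.g. (sketch, using this run's profile and binary):

	out/minikube-linux-amd64 -p addons-614829 addons list
	out/minikube-linux-amd64 -p addons-614829 addons enable ingress
	out/minikube-linux-amd64 -p addons-614829 addons disable inspektor-gadget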
	I1228 06:28:21.919662   10425 addons.go:70] Setting yakd=true in profile "addons-614829"
	I1228 06:28:21.919687   10425 addons.go:239] Setting addon yakd=true in "addons-614829"
	I1228 06:28:21.919699   10425 addons.go:70] Setting inspektor-gadget=true in profile "addons-614829"
	I1228 06:28:21.919720   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919726   10425 addons.go:239] Setting addon inspektor-gadget=true in "addons-614829"
	I1228 06:28:21.919724   10425 addons.go:70] Setting default-storageclass=true in profile "addons-614829"
	I1228 06:28:21.919744   10425 addons.go:70] Setting cloud-spanner=true in profile "addons-614829"
	I1228 06:28:21.919781   10425 addons.go:70] Setting gcp-auth=true in profile "addons-614829"
	I1228 06:28:21.919793   10425 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-614829"
	I1228 06:28:21.919793   10425 addons.go:239] Setting addon cloud-spanner=true in "addons-614829"
	I1228 06:28:21.919805   10425 mustload.go:66] Loading cluster: addons-614829
	I1228 06:28:21.919815   10425 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-614829"
	I1228 06:28:21.919826   10425 addons.go:70] Setting storage-provisioner=true in profile "addons-614829"
	I1228 06:28:21.919818   10425 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-614829"
	I1228 06:28:21.919782   10425 addons.go:70] Setting ingress-dns=true in profile "addons-614829"
	I1228 06:28:21.919843   10425 addons.go:239] Setting addon storage-provisioner=true in "addons-614829"
	I1228 06:28:21.919845   10425 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-614829"
	I1228 06:28:21.919897   10425 addons.go:70] Setting volcano=true in profile "addons-614829"
	I1228 06:28:21.919914   10425 addons.go:239] Setting addon volcano=true in "addons-614829"
	I1228 06:28:21.919939   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919950   10425 addons.go:70] Setting volumesnapshots=true in profile "addons-614829"
	I1228 06:28:21.919967   10425 addons.go:239] Setting addon volumesnapshots=true in "addons-614829"
	I1228 06:28:21.919977   10425 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:28:21.919982   10425 addons.go:70] Setting registry=true in profile "addons-614829"
	I1228 06:28:21.919999   10425 addons.go:239] Setting addon registry=true in "addons-614829"
	I1228 06:28:21.920002   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.920023   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.920220   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.920251   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.920308   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.920433   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.920454   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.920472   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.919848   10425 addons.go:239] Setting addon ingress-dns=true in "addons-614829"
	I1228 06:28:21.920605   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919757   10425 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-614829"
	I1228 06:28:21.920781   10425 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-614829"
	I1228 06:28:21.920803   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919831   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.921187   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.919762   10425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-614829"
	I1228 06:28:21.919807   10425 addons.go:70] Setting metrics-server=true in profile "addons-614829"
	I1228 06:28:21.921736   10425 addons.go:239] Setting addon metrics-server=true in "addons-614829"
	I1228 06:28:21.921774   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.921785   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.921861   10425 out.go:179] * Verifying Kubernetes components...
	I1228 06:28:21.920631   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919774   10425 addons.go:70] Setting ingress=true in profile "addons-614829"
	I1228 06:28:21.922119   10425 addons.go:239] Setting addon ingress=true in "addons-614829"
	I1228 06:28:21.922151   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919765   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919832   10425 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-614829"
	I1228 06:28:21.919736   10425 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:28:21.922487   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919798   10425 addons.go:70] Setting registry-creds=true in profile "addons-614829"
	I1228 06:28:21.922660   10425 addons.go:239] Setting addon registry-creds=true in "addons-614829"
	I1228 06:28:21.922707   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.919807   10425 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-614829"
	I1228 06:28:21.922839   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.923427   10425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:28:21.933183   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.933319   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.933528   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.933538   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.933903   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.934302   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.934609   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.950652   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.951399   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:21.962126   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:21.964554   10425 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I1228 06:28:21.968406   10425 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1228 06:28:21.968427   10425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1228 06:28:21.968482   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:21.974353   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1228 06:28:21.975576   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1228 06:28:21.975602   10425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1228 06:28:21.975716   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	W1228 06:28:21.991293   10425 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1228 06:28:21.993725   10425 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1228 06:28:21.994965   10425 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1228 06:28:21.994985   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1228 06:28:21.995065   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.026611   10425 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1228 06:28:22.029634   10425 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1228 06:28:22.029945   10425 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1228 06:28:22.030004   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1228 06:28:22.030005   10425 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-614829"
	I1228 06:28:22.030063   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:22.030087   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.030491   10425 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1228 06:28:22.030544   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:22.030901   10425 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1228 06:28:22.030913   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1228 06:28:22.031222   10425 addons.go:239] Setting addon default-storageclass=true in "addons-614829"
	I1228 06:28:22.031258   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:22.031591   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.031704   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:22.032944   10425 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1228 06:28:22.032960   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1228 06:28:22.033077   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.033751   10425 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1228 06:28:22.034920   10425 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1228 06:28:22.034957   10425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:28:22.035024   10425 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1228 06:28:22.035205   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1228 06:28:22.035253   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.036832   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.037554   10425 out.go:179]   - Using image docker.io/registry:3.0.0
	I1228 06:28:22.037739   10425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:28:22.037751   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:28:22.037795   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.038547   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1228 06:28:22.038834   10425 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1228 06:28:22.038976   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1228 06:28:22.039152   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.046734   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1228 06:28:22.049449   10425 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1228 06:28:22.051923   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1228 06:28:22.052320   10425 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1228 06:28:22.052336   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1228 06:28:22.052398   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.054249   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1228 06:28:22.055697   10425 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1228 06:28:22.057227   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1228 06:28:22.057630   10425 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1228 06:28:22.057646   10425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1228 06:28:22.057724   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.059524   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1228 06:28:22.060724   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1228 06:28:22.062022   10425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1228 06:28:22.064578   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1228 06:28:22.064602   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1228 06:28:22.066203   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.072836   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.078805   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.083222   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.086558   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.091476   10425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1228 06:28:22.093515   10425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1228 06:28:22.094732   10425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1228 06:28:22.096014   10425 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1228 06:28:22.096069   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1228 06:28:22.096146   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.099024   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.109044   10425 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1228 06:28:22.110230   10425 out.go:179]   - Using image docker.io/busybox:stable
	I1228 06:28:22.112283   10425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1228 06:28:22.112573   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1228 06:28:22.112667   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.117017   10425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:28:22.117046   10425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:28:22.117102   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:22.118928   10425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
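Note: the sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host gateway (192.168.49.1), and adds the log directive before errors. Sketch of the inserted fragment and a way to verify it (the surrounding Corefile content is assumed, not dumped in this log):

	# Inserted into the Corefile (reconstructed from the sed expressions above):
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	# Verify after the replace:
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}'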
	I1228 06:28:22.120895   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.124195   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.125726   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.132949   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.133495   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.134117   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.134391   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:22.149550   10425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:28:22.151821   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	W1228 06:28:22.153822   10425 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1228 06:28:22.153862   10425 retry.go:84] will retry after 200ms: ssh: handshake failed: EOF
	I1228 06:28:22.158017   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	W1228 06:28:22.159098   10425 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1228 06:28:22.215224   10425 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1228 06:28:22.215248   10425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1228 06:28:22.242327   10425 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1228 06:28:22.242355   10425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1228 06:28:22.252583   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:28:22.258726   10425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1228 06:28:22.258749   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1228 06:28:22.260156   10425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1228 06:28:22.260178   10425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1228 06:28:22.260631   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1228 06:28:22.273475   10425 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1228 06:28:22.273501   10425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1228 06:28:22.273664   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1228 06:28:22.279013   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1228 06:28:22.281087   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1228 06:28:22.296059   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1228 06:28:22.299221   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1228 06:28:22.299637   10425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1228 06:28:22.299660   10425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1228 06:28:22.301076   10425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1228 06:28:22.301094   10425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1228 06:28:22.303537   10425 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1228 06:28:22.303552   10425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1228 06:28:22.312471   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1228 06:28:22.312553   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1228 06:28:22.321848   10425 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1228 06:28:22.321871   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1228 06:28:22.330332   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1228 06:28:22.355817   10425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 06:28:22.355845   10425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1228 06:28:22.384279   10425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1228 06:28:22.384307   10425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1228 06:28:22.387545   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1228 06:28:22.387747   10425 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1228 06:28:22.387768   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1228 06:28:22.392927   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1228 06:28:22.392947   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1228 06:28:22.417939   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1228 06:28:22.429170   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1228 06:28:22.429196   10425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1228 06:28:22.442739   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1228 06:28:22.442777   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1228 06:28:22.474268   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1228 06:28:22.484351   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1228 06:28:22.484383   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1228 06:28:22.507168   10425 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1228 06:28:22.507192   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1228 06:28:22.541745   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1228 06:28:22.548591   10425 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1228 06:28:22.551131   10425 node_ready.go:35] waiting up to 6m0s for node "addons-614829" to be "Ready" ...
	I1228 06:28:22.557635   10425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1228 06:28:22.557661   10425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1228 06:28:22.564242   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:28:22.595662   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1228 06:28:22.625282   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1228 06:28:22.625312   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1228 06:28:22.674072   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1228 06:28:22.674099   10425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1228 06:28:22.711147   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1228 06:28:22.711172   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1228 06:28:22.770462   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1228 06:28:22.770486   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1228 06:28:22.850149   10425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1228 06:28:22.850198   10425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1228 06:28:22.944687   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
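Note: this single apply installs the whole csi-hostpath stack (six RBAC manifests plus the attacher, driverinfo, plugin, resizer, and its StorageClass); the pods it creates are what the "Verifying csi-hostpath-driver addon" step further down polls. To watch them directly, as a sketch (label taken from that wait loop):

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver -w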
	I1228 06:28:23.066476   10425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-614829" context rescaled to 1 replicas
	I1228 06:28:23.293473   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.040846756s)
	I1228 06:28:23.293549   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.032898325s)
	I1228 06:28:23.293610   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.019932512s)
	I1228 06:28:23.293661   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.014592285s)
	I1228 06:28:23.743368   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.462246883s)
	I1228 06:28:23.743415   10425 addons.go:495] Verifying addon ingress=true in "addons-614829"
	I1228 06:28:23.743419   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.44417196s)
	I1228 06:28:23.743517   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.413161736s)
	I1228 06:28:23.743594   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.356022605s)
	I1228 06:28:23.743373   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.447279763s)
	I1228 06:28:23.743761   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.325785926s)
	I1228 06:28:23.744020   10425 addons.go:495] Verifying addon metrics-server=true in "addons-614829"
	I1228 06:28:23.743800   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.269504091s)
	I1228 06:28:23.744139   10425 addons.go:495] Verifying addon registry=true in "addons-614829"
	I1228 06:28:23.744287   10425 out.go:179] * Verifying ingress addon...
	I1228 06:28:23.745130   10425 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-614829 service yakd-dashboard -n yakd-dashboard
	
	I1228 06:28:23.745942   10425 out.go:179] * Verifying registry addon...
	I1228 06:28:23.746715   10425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1228 06:28:23.749194   10425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1228 06:28:23.752053   10425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1228 06:28:23.752071   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:23.752579   10425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1228 06:28:23.752597   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:24.104296   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.562500554s)
	W1228 06:28:24.104352   10425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
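Note: this failure is the classic CRD-ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml was submitted in the same apply as the CRDs that define its kind, before the API server had established them. Minikube recovers by retrying with `kubectl apply --force` a moment later (06:28:24.237239). A sketch of the ordering that avoids the race in the first place (paths from this log):

	# Apply the CRDs alone, wait until they are Established, then the CRs.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml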
	I1228 06:28:24.104421   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.540149288s)
	I1228 06:28:24.104692   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.508995777s)
	I1228 06:28:24.105232   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.160317429s)
	I1228 06:28:24.105290   10425 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-614829"
	I1228 06:28:24.107890   10425 out.go:179] * Verifying csi-hostpath-driver addon...
	I1228 06:28:24.110601   10425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1228 06:28:24.115040   10425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1228 06:28:24.115059   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1228 06:28:24.117458   10425 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
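Note: this warning is an optimistic-concurrency conflict: storage-provisioner-rancher read csi-hostpath-sc, another writer updated it first, and the stale resourceVersion was rejected. A retry, or a patch (which carries no resourceVersion), resolves it; sketch with the two StorageClasses involved in this run:

	kubectl patch storageclass csi-hostpath-sc -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'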
	I1228 06:28:24.237239   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1228 06:28:24.250709   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:24.251974   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1228 06:28:24.554240   10425 node_ready.go:57] node "addons-614829" has "Ready":"False" status (will retry)
	I1228 06:28:24.614385   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:24.750472   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:24.751473   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:25.112682   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:25.250269   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:25.251571   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:25.613821   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:25.749781   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:25.751512   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:26.114444   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:26.250021   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:26.251566   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:26.613607   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:26.686154   10425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.448866643s)
	I1228 06:28:26.750000   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:26.751365   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1228 06:28:27.053908   10425 node_ready.go:57] node "addons-614829" has "Ready":"False" status (will retry)
	I1228 06:28:27.113787   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:27.250289   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:27.251369   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:27.614692   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:27.749807   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:27.751493   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:28.114510   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:28.250223   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:28.251279   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:28.614447   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:28.749890   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:28.751441   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1228 06:28:29.054320   10425 node_ready.go:57] node "addons-614829" has "Ready":"False" status (will retry)
	I1228 06:28:29.113996   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:29.250158   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:29.251370   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:29.571837   10425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1228 06:28:29.571896   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:29.590999   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:29.613514   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:29.686356   10425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1228 06:28:29.698737   10425 addons.go:239] Setting addon gcp-auth=true in "addons-614829"
	I1228 06:28:29.698789   10425 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:28:29.699318   10425 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:28:29.717306   10425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1228 06:28:29.717360   10425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:28:29.733185   10425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:28:29.749870   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:29.751404   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:29.822546   10425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1228 06:28:29.823836   10425 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1228 06:28:29.825061   10425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1228 06:28:29.825083   10425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1228 06:28:29.837789   10425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1228 06:28:29.837818   10425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1228 06:28:29.850234   10425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1228 06:28:29.850254   10425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1228 06:28:29.862316   10425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
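Note: gcp-auth is enabled here as a consequence of the credentials file staged at 06:28:29.571837; the apply above installs the namespace, service, and webhook manifests scp'd just before it. To check the webhook pod that the next wait loop polls, as a sketch (label and namespace taken from that loop):

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n gcp-auth \
	  get pods -l kubernetes.io/minikube-addons=gcp-auth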
	I1228 06:28:30.113450   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:30.160024   10425 addons.go:495] Verifying addon gcp-auth=true in "addons-614829"
	I1228 06:28:30.161225   10425 out.go:179] * Verifying gcp-auth addon...
	I1228 06:28:30.162859   10425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1228 06:28:30.214186   10425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1228 06:28:30.214208   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:30.249877   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:30.251289   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:30.613534   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:30.665892   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:30.749341   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:30.751136   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:31.113911   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:31.166471   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:31.250520   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:31.251662   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1228 06:28:31.554298   10425 node_ready.go:57] node "addons-614829" has "Ready":"False" status (will retry)
	I1228 06:28:31.614084   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:31.665468   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:31.750130   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:31.751610   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:32.113565   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:32.165748   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:32.249363   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:32.251232   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:32.613852   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:32.666143   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:32.749716   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:32.751208   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:33.113051   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:33.165315   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:33.249955   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:33.251551   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1228 06:28:33.554545   10425 node_ready.go:57] node "addons-614829" has "Ready":"False" status (will retry)
	I1228 06:28:33.614441   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:33.665807   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:33.749126   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:33.751952   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:34.114551   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:34.165703   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:34.250420   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:34.251971   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:34.614168   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:34.665374   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:34.749754   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:34.751194   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:35.136298   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:35.168894   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:35.249168   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:35.251455   10425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1228 06:28:35.251473   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:35.554486   10425 node_ready.go:49] node "addons-614829" is "Ready"
	I1228 06:28:35.554526   10425 node_ready.go:38] duration metric: took 13.003361803s for node "addons-614829" to be "Ready" ...
	I1228 06:28:35.554542   10425 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:28:35.554603   10425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:28:35.578381   10425 api_server.go:72] duration metric: took 13.658883882s to wait for apiserver process to appear ...
	I1228 06:28:35.578494   10425 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:28:35.578541   10425 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1228 06:28:35.583628   10425 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1228 06:28:35.584625   10425 api_server.go:141] control plane version: v1.35.0
	I1228 06:28:35.584651   10425 api_server.go:131] duration metric: took 6.143943ms to wait for apiserver health ...
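The healthz check logged above is a plain HTTPS GET that expects a 200 status and the body "ok". A self-contained sketch of that probe; InsecureSkipVerify is an assumption for brevity here, whereas minikube verifies against its own cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip CA verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy apiserver: status 200, body "ok", matching the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}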
	I1228 06:28:35.584662   10425 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:28:35.590062   10425 system_pods.go:59] 20 kube-system pods found
	I1228 06:28:35.590102   10425 system_pods.go:61] "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1228 06:28:35.590112   10425 system_pods.go:61] "coredns-7d764666f9-hqcjk" [51266ad0-a2c5-4503-9277-16134736b5d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:28:35.590125   10425 system_pods.go:61] "csi-hostpath-attacher-0" [39bae211-809d-4dc1-b93e-91b91338c170] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1228 06:28:35.590135   10425 system_pods.go:61] "csi-hostpath-resizer-0" [91e58e30-a359-4cc6-a669-14d748d92a39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1228 06:28:35.590150   10425 system_pods.go:61] "csi-hostpathplugin-mjjzs" [8973dcfc-dbee-4adf-a53c-909add2ef0a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1228 06:28:35.590157   10425 system_pods.go:61] "etcd-addons-614829" [3d45bd6d-473a-4067-add5-d309e0e881ba] Running
	I1228 06:28:35.590165   10425 system_pods.go:61] "kindnet-2w6qk" [205f9d2d-7a18-45a3-a902-b08c2fbebf19] Running
	I1228 06:28:35.590170   10425 system_pods.go:61] "kube-apiserver-addons-614829" [c3c66e3e-18fe-4f17-a49d-237f6b225861] Running
	I1228 06:28:35.590178   10425 system_pods.go:61] "kube-controller-manager-addons-614829" [0fe6bda6-f99c-4b5e-8e91-3b9fee5fee2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:28:35.590185   10425 system_pods.go:61] "kube-ingress-dns-minikube" [d73fcfe6-6dc3-492f-8c17-33eacbef8253] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1228 06:28:35.590191   10425 system_pods.go:61] "kube-proxy-qll2z" [5bfc9e46-af15-4002-ae04-c013398c7877] Running
	I1228 06:28:35.590196   10425 system_pods.go:61] "kube-scheduler-addons-614829" [19b67d3f-19e0-4106-b94e-b9aa27196663] Running
	I1228 06:28:35.590220   10425 system_pods.go:61] "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 06:28:35.590229   10425 system_pods.go:61] "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1228 06:28:35.590237   10425 system_pods.go:61] "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1228 06:28:35.590245   10425 system_pods.go:61] "registry-creds-567fb78d95-dbpdf" [41a54e80-c251-47e4-af2f-c2b9230e01f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1228 06:28:35.590255   10425 system_pods.go:61] "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1228 06:28:35.590263   10425 system_pods.go:61] "snapshot-controller-6588d87457-7p9p7" [38a428c5-5e76-4637-a987-e0a1bc5d719d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.590274   10425 system_pods.go:61] "snapshot-controller-6588d87457-rgwjp" [6ae00218-05fe-4e0b-97fd-2aebb62e8b87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.590282   10425 system_pods.go:61] "storage-provisioner" [b51abd46-cf77-4651-b9d8-37b184ed587f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:28:35.590290   10425 system_pods.go:74] duration metric: took 5.621414ms to wait for pod list to return data ...
	I1228 06:28:35.590300   10425 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:28:35.594614   10425 default_sa.go:45] found service account: "default"
	I1228 06:28:35.594645   10425 default_sa.go:55] duration metric: took 4.338977ms for default service account to be created ...
	I1228 06:28:35.594657   10425 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:28:35.635813   10425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1228 06:28:35.635840   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:35.638913   10425 system_pods.go:86] 20 kube-system pods found
	I1228 06:28:35.638954   10425 system_pods.go:89] "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1228 06:28:35.638962   10425 system_pods.go:89] "coredns-7d764666f9-hqcjk" [51266ad0-a2c5-4503-9277-16134736b5d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:28:35.638970   10425 system_pods.go:89] "csi-hostpath-attacher-0" [39bae211-809d-4dc1-b93e-91b91338c170] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1228 06:28:35.638978   10425 system_pods.go:89] "csi-hostpath-resizer-0" [91e58e30-a359-4cc6-a669-14d748d92a39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1228 06:28:35.638985   10425 system_pods.go:89] "csi-hostpathplugin-mjjzs" [8973dcfc-dbee-4adf-a53c-909add2ef0a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1228 06:28:35.638989   10425 system_pods.go:89] "etcd-addons-614829" [3d45bd6d-473a-4067-add5-d309e0e881ba] Running
	I1228 06:28:35.638994   10425 system_pods.go:89] "kindnet-2w6qk" [205f9d2d-7a18-45a3-a902-b08c2fbebf19] Running
	I1228 06:28:35.638998   10425 system_pods.go:89] "kube-apiserver-addons-614829" [c3c66e3e-18fe-4f17-a49d-237f6b225861] Running
	I1228 06:28:35.639002   10425 system_pods.go:89] "kube-controller-manager-addons-614829" [0fe6bda6-f99c-4b5e-8e91-3b9fee5fee2d] Running
	I1228 06:28:35.639009   10425 system_pods.go:89] "kube-ingress-dns-minikube" [d73fcfe6-6dc3-492f-8c17-33eacbef8253] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1228 06:28:35.639013   10425 system_pods.go:89] "kube-proxy-qll2z" [5bfc9e46-af15-4002-ae04-c013398c7877] Running
	I1228 06:28:35.639017   10425 system_pods.go:89] "kube-scheduler-addons-614829" [19b67d3f-19e0-4106-b94e-b9aa27196663] Running
	I1228 06:28:35.639022   10425 system_pods.go:89] "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 06:28:35.639042   10425 system_pods.go:89] "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1228 06:28:35.639051   10425 system_pods.go:89] "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1228 06:28:35.639062   10425 system_pods.go:89] "registry-creds-567fb78d95-dbpdf" [41a54e80-c251-47e4-af2f-c2b9230e01f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1228 06:28:35.639070   10425 system_pods.go:89] "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1228 06:28:35.639081   10425 system_pods.go:89] "snapshot-controller-6588d87457-7p9p7" [38a428c5-5e76-4637-a987-e0a1bc5d719d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.639090   10425 system_pods.go:89] "snapshot-controller-6588d87457-rgwjp" [6ae00218-05fe-4e0b-97fd-2aebb62e8b87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.639098   10425 system_pods.go:89] "storage-provisioner" [b51abd46-cf77-4651-b9d8-37b184ed587f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:28:35.639142   10425 retry.go:84] will retry after 200ms: missing components: kube-dns
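The retry.go line above shows the retry-with-backoff shape used while kube-dns is still missing. A generic sketch of that shape; the attempt count, delays, and the simulated failing check are illustrative, not minikube's actual schedule:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping
// a growing delay between tries and logging each retry like the line above.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple exponential growth between attempts
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}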
	I1228 06:28:35.688218   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:35.750770   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:35.752113   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:35.880706   10425 system_pods.go:86] 20 kube-system pods found
	I1228 06:28:35.880747   10425 system_pods.go:89] "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1228 06:28:35.880760   10425 system_pods.go:89] "coredns-7d764666f9-hqcjk" [51266ad0-a2c5-4503-9277-16134736b5d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:28:35.880772   10425 system_pods.go:89] "csi-hostpath-attacher-0" [39bae211-809d-4dc1-b93e-91b91338c170] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1228 06:28:35.880780   10425 system_pods.go:89] "csi-hostpath-resizer-0" [91e58e30-a359-4cc6-a669-14d748d92a39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1228 06:28:35.880788   10425 system_pods.go:89] "csi-hostpathplugin-mjjzs" [8973dcfc-dbee-4adf-a53c-909add2ef0a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1228 06:28:35.880802   10425 system_pods.go:89] "etcd-addons-614829" [3d45bd6d-473a-4067-add5-d309e0e881ba] Running
	I1228 06:28:35.880808   10425 system_pods.go:89] "kindnet-2w6qk" [205f9d2d-7a18-45a3-a902-b08c2fbebf19] Running
	I1228 06:28:35.880814   10425 system_pods.go:89] "kube-apiserver-addons-614829" [c3c66e3e-18fe-4f17-a49d-237f6b225861] Running
	I1228 06:28:35.880819   10425 system_pods.go:89] "kube-controller-manager-addons-614829" [0fe6bda6-f99c-4b5e-8e91-3b9fee5fee2d] Running
	I1228 06:28:35.880829   10425 system_pods.go:89] "kube-ingress-dns-minikube" [d73fcfe6-6dc3-492f-8c17-33eacbef8253] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1228 06:28:35.880834   10425 system_pods.go:89] "kube-proxy-qll2z" [5bfc9e46-af15-4002-ae04-c013398c7877] Running
	I1228 06:28:35.880840   10425 system_pods.go:89] "kube-scheduler-addons-614829" [19b67d3f-19e0-4106-b94e-b9aa27196663] Running
	I1228 06:28:35.880848   10425 system_pods.go:89] "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 06:28:35.880857   10425 system_pods.go:89] "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1228 06:28:35.880865   10425 system_pods.go:89] "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1228 06:28:35.880873   10425 system_pods.go:89] "registry-creds-567fb78d95-dbpdf" [41a54e80-c251-47e4-af2f-c2b9230e01f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1228 06:28:35.880880   10425 system_pods.go:89] "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1228 06:28:35.880888   10425 system_pods.go:89] "snapshot-controller-6588d87457-7p9p7" [38a428c5-5e76-4637-a987-e0a1bc5d719d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.880897   10425 system_pods.go:89] "snapshot-controller-6588d87457-rgwjp" [6ae00218-05fe-4e0b-97fd-2aebb62e8b87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:35.880904   10425 system_pods.go:89] "storage-provisioner" [b51abd46-cf77-4651-b9d8-37b184ed587f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:28:36.116182   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:36.122714   10425 system_pods.go:86] 20 kube-system pods found
	I1228 06:28:36.122761   10425 system_pods.go:89] "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1228 06:28:36.122772   10425 system_pods.go:89] "coredns-7d764666f9-hqcjk" [51266ad0-a2c5-4503-9277-16134736b5d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:28:36.122857   10425 system_pods.go:89] "csi-hostpath-attacher-0" [39bae211-809d-4dc1-b93e-91b91338c170] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1228 06:28:36.122868   10425 system_pods.go:89] "csi-hostpath-resizer-0" [91e58e30-a359-4cc6-a669-14d748d92a39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1228 06:28:36.122878   10425 system_pods.go:89] "csi-hostpathplugin-mjjzs" [8973dcfc-dbee-4adf-a53c-909add2ef0a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1228 06:28:36.122886   10425 system_pods.go:89] "etcd-addons-614829" [3d45bd6d-473a-4067-add5-d309e0e881ba] Running
	I1228 06:28:36.122893   10425 system_pods.go:89] "kindnet-2w6qk" [205f9d2d-7a18-45a3-a902-b08c2fbebf19] Running
	I1228 06:28:36.122900   10425 system_pods.go:89] "kube-apiserver-addons-614829" [c3c66e3e-18fe-4f17-a49d-237f6b225861] Running
	I1228 06:28:36.122906   10425 system_pods.go:89] "kube-controller-manager-addons-614829" [0fe6bda6-f99c-4b5e-8e91-3b9fee5fee2d] Running
	I1228 06:28:36.122925   10425 system_pods.go:89] "kube-ingress-dns-minikube" [d73fcfe6-6dc3-492f-8c17-33eacbef8253] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1228 06:28:36.122930   10425 system_pods.go:89] "kube-proxy-qll2z" [5bfc9e46-af15-4002-ae04-c013398c7877] Running
	I1228 06:28:36.122936   10425 system_pods.go:89] "kube-scheduler-addons-614829" [19b67d3f-19e0-4106-b94e-b9aa27196663] Running
	I1228 06:28:36.122949   10425 system_pods.go:89] "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 06:28:36.122959   10425 system_pods.go:89] "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1228 06:28:36.122968   10425 system_pods.go:89] "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1228 06:28:36.122978   10425 system_pods.go:89] "registry-creds-567fb78d95-dbpdf" [41a54e80-c251-47e4-af2f-c2b9230e01f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1228 06:28:36.122987   10425 system_pods.go:89] "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1228 06:28:36.122999   10425 system_pods.go:89] "snapshot-controller-6588d87457-7p9p7" [38a428c5-5e76-4637-a987-e0a1bc5d719d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:36.123008   10425 system_pods.go:89] "snapshot-controller-6588d87457-rgwjp" [6ae00218-05fe-4e0b-97fd-2aebb62e8b87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:36.123017   10425 system_pods.go:89] "storage-provisioner" [b51abd46-cf77-4651-b9d8-37b184ed587f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:28:36.166911   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:36.252704   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:36.252804   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:36.514926   10425 system_pods.go:86] 20 kube-system pods found
	I1228 06:28:36.514972   10425 system_pods.go:89] "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1228 06:28:36.514982   10425 system_pods.go:89] "coredns-7d764666f9-hqcjk" [51266ad0-a2c5-4503-9277-16134736b5d1] Running
	I1228 06:28:36.514993   10425 system_pods.go:89] "csi-hostpath-attacher-0" [39bae211-809d-4dc1-b93e-91b91338c170] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1228 06:28:36.515004   10425 system_pods.go:89] "csi-hostpath-resizer-0" [91e58e30-a359-4cc6-a669-14d748d92a39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1228 06:28:36.515021   10425 system_pods.go:89] "csi-hostpathplugin-mjjzs" [8973dcfc-dbee-4adf-a53c-909add2ef0a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1228 06:28:36.515038   10425 system_pods.go:89] "etcd-addons-614829" [3d45bd6d-473a-4067-add5-d309e0e881ba] Running
	I1228 06:28:36.515044   10425 system_pods.go:89] "kindnet-2w6qk" [205f9d2d-7a18-45a3-a902-b08c2fbebf19] Running
	I1228 06:28:36.515051   10425 system_pods.go:89] "kube-apiserver-addons-614829" [c3c66e3e-18fe-4f17-a49d-237f6b225861] Running
	I1228 06:28:36.515057   10425 system_pods.go:89] "kube-controller-manager-addons-614829" [0fe6bda6-f99c-4b5e-8e91-3b9fee5fee2d] Running
	I1228 06:28:36.515066   10425 system_pods.go:89] "kube-ingress-dns-minikube" [d73fcfe6-6dc3-492f-8c17-33eacbef8253] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1228 06:28:36.515071   10425 system_pods.go:89] "kube-proxy-qll2z" [5bfc9e46-af15-4002-ae04-c013398c7877] Running
	I1228 06:28:36.515078   10425 system_pods.go:89] "kube-scheduler-addons-614829" [19b67d3f-19e0-4106-b94e-b9aa27196663] Running
	I1228 06:28:36.515086   10425 system_pods.go:89] "metrics-server-5778bb4788-g62lj" [95f8a6e6-5d05-490d-8e85-8fc9c8e923ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1228 06:28:36.515095   10425 system_pods.go:89] "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1228 06:28:36.515107   10425 system_pods.go:89] "registry-788cd7d5bc-px79j" [057ee256-50a5-44da-80e4-78a317bee4be] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1228 06:28:36.515115   10425 system_pods.go:89] "registry-creds-567fb78d95-dbpdf" [41a54e80-c251-47e4-af2f-c2b9230e01f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1228 06:28:36.515123   10425 system_pods.go:89] "registry-proxy-zbdsv" [d6a7e2bf-609a-4ca0-9a28-393dba8a4156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1228 06:28:36.515130   10425 system_pods.go:89] "snapshot-controller-6588d87457-7p9p7" [38a428c5-5e76-4637-a987-e0a1bc5d719d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:36.515140   10425 system_pods.go:89] "snapshot-controller-6588d87457-rgwjp" [6ae00218-05fe-4e0b-97fd-2aebb62e8b87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1228 06:28:36.515145   10425 system_pods.go:89] "storage-provisioner" [b51abd46-cf77-4651-b9d8-37b184ed587f] Running
	I1228 06:28:36.515157   10425 system_pods.go:126] duration metric: took 920.492416ms to wait for k8s-apps to be running ...
	I1228 06:28:36.515169   10425 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:28:36.515224   10425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:28:36.535181   10425 system_svc.go:56] duration metric: took 20.004557ms WaitForService to wait for kubelet
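The kubelet check two lines up relies entirely on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A sketch of the same probe run locally rather than over SSH, with the arguments copied from the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means active; any error means the unit is not running.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}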
	I1228 06:28:36.535211   10425 kubeadm.go:587] duration metric: took 14.615721225s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:28:36.535234   10425 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:28:36.538323   10425 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:28:36.538365   10425 node_conditions.go:123] node cpu capacity is 8
	I1228 06:28:36.538384   10425 node_conditions.go:105] duration metric: took 3.143707ms to run NodePressure ...
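The NodePressure step reads the node's capacity straight from its status. A client-go sketch that prints the same two figures the log reports, assuming a standard kubeconfig and the node name from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-614829", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.capacity holds resource.Quantity values keyed by resource name.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}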
	I1228 06:28:36.538399   10425 start.go:242] waiting for startup goroutines ...
	I1228 06:28:36.614514   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:36.666478   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:36.751748   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:36.752480   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:37.114379   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:37.166198   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:37.250135   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:37.251788   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:37.614141   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:37.665685   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:37.749670   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:37.751546   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:38.114117   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:38.166784   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:38.250611   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:38.252208   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:38.614477   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:38.666387   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:38.750455   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:38.752113   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:39.114928   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:39.166498   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:39.250580   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:39.252069   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:39.647059   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:39.666765   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:39.763059   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:39.767301   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:40.115564   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:40.166546   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:40.250719   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:40.253906   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:40.614281   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:40.665439   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:40.750544   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:40.752143   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:41.114489   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:41.214720   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:41.250363   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:41.252392   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:41.616767   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:41.716549   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:41.750894   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:41.751952   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:42.114358   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:42.165695   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:42.250606   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:42.252540   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:42.614541   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:42.666024   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:42.750321   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:42.751619   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:43.114336   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:43.166428   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:43.251294   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:43.252921   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:43.614133   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:43.714652   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:43.750431   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:43.752323   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:44.114759   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:44.166413   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:44.250323   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:44.251895   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:44.614337   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:44.665702   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:44.750309   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:44.751804   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:45.114514   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:45.165946   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:45.249810   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:45.251788   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:45.614098   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:45.665976   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:45.749734   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:45.751730   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:46.113370   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:46.165190   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:46.250086   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:46.251481   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:46.613673   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:46.666711   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:46.750907   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:46.752726   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:47.115484   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:47.166372   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:47.250623   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:47.252054   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:47.614074   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:47.666561   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:47.751094   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:47.752045   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:48.114282   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:48.165565   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:48.250279   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:48.251655   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:48.614122   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:48.665212   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:48.750261   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:48.752139   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:49.114172   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:49.165558   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:49.250481   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:49.350916   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:49.614553   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:49.666343   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:49.750777   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:49.751966   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:50.116574   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:50.165756   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:50.249416   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:50.350062   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:50.614217   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:50.665691   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:50.750803   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:50.751567   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:51.113877   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:51.214736   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:51.250188   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:51.251799   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:51.614686   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:51.666299   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:51.750708   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:51.751868   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:52.114514   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:52.166277   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:52.250331   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:52.251688   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:52.614015   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:52.715004   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:52.749653   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:52.751219   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:53.114422   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:53.166016   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:53.250119   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:53.251924   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:53.614299   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:53.665377   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:53.750514   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:53.751733   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:54.113957   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:54.166255   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:54.250158   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:54.251489   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:54.614735   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:54.665841   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:54.749333   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:54.751563   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:55.115610   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:55.167241   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:55.250301   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:55.251706   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:55.614524   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:55.666257   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:55.750470   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:55.751805   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:56.114047   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:56.218376   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:56.250271   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:56.251558   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:56.614707   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:56.665845   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:56.749675   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:56.751136   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:57.114460   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:57.165673   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:57.250589   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:57.252121   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:57.614666   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:57.666777   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:57.750857   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:57.753070   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:58.114491   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:58.165857   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:58.249923   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:58.251776   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1228 06:28:58.615064   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:58.667477   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:58.750367   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:58.752012   10425 kapi.go:107] duration metric: took 35.002815721s to wait for kubernetes.io/minikube-addons=registry ...
	I1228 06:28:59.114902   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:59.166412   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:59.250330   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:28:59.615262   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:28:59.665831   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:28:59.749784   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:00.113822   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:00.166317   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:00.250153   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:00.614313   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:00.714023   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:00.814891   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:01.120569   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:01.167497   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:01.251363   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:01.615557   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:01.667100   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:01.751695   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:02.113973   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:02.166065   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:02.249964   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:02.613822   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:02.685789   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:02.749538   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:03.113490   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:03.165614   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:03.250987   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:03.615790   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:03.715506   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:03.816382   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:04.114409   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:04.166124   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:04.267460   10425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1228 06:29:04.613492   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:04.665689   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:04.750309   10425 kapi.go:107] duration metric: took 41.003593129s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1228 06:29:05.115603   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:05.166119   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:05.663080   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:05.665380   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:06.159001   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:06.165895   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1228 06:29:06.615293   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:06.665860   10425 kapi.go:107] duration metric: took 36.502995604s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1228 06:29:06.672540   10425 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-614829 cluster.
	I1228 06:29:06.674096   10425 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1228 06:29:06.675188   10425 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
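The three gcp-auth hints above are the addon's own guidance: credential injection is opt-out, keyed off a pod label. As a minimal sketch of the opt-out (only the `gcp-auth-skip-secret` label key is taken from the log above; the pod name, image, and label value are illustrative assumptions, and whether the webhook checks presence or value is not confirmed by this report), in Go using the Kubernetes API types:

// Hypothetical sketch: opting a pod out of gcp-auth credential injection.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds-example", // illustrative name, not from the report
			Labels: map[string]string{
				// Per the addon's hint above, this label asks the gcp-auth
				// webhook to skip mounting the credentials secret.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // pipe to `kubectl apply -f -` to create the pod
}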
	I1228 06:29:07.114372   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:07.613628   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:08.114113   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:08.614622   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:09.114143   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:09.614406   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:10.113891   10425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1228 06:29:10.613523   10425 kapi.go:107] duration metric: took 46.5029232s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1228 06:29:10.615221   10425 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, ingress-dns, registry-creds, cloud-spanner, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1228 06:29:10.616303   10425 addons.go:530] duration metric: took 48.696785794s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin ingress-dns registry-creds cloud-spanner inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1228 06:29:10.616344   10425 start.go:247] waiting for cluster config update ...
	I1228 06:29:10.616369   10425 start.go:256] writing updated cluster config ...
	I1228 06:29:10.616620   10425 ssh_runner.go:195] Run: rm -f paused
	I1228 06:29:10.620580   10425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:29:10.622904   10425 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hqcjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.626130   10425 pod_ready.go:94] pod "coredns-7d764666f9-hqcjk" is "Ready"
	I1228 06:29:10.626147   10425 pod_ready.go:86] duration metric: took 3.226074ms for pod "coredns-7d764666f9-hqcjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.627613   10425 pod_ready.go:83] waiting for pod "etcd-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.630583   10425 pod_ready.go:94] pod "etcd-addons-614829" is "Ready"
	I1228 06:29:10.630601   10425 pod_ready.go:86] duration metric: took 2.969358ms for pod "etcd-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.632076   10425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.634954   10425 pod_ready.go:94] pod "kube-apiserver-addons-614829" is "Ready"
	I1228 06:29:10.634974   10425 pod_ready.go:86] duration metric: took 2.874012ms for pod "kube-apiserver-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:10.636535   10425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:11.024298   10425 pod_ready.go:94] pod "kube-controller-manager-addons-614829" is "Ready"
	I1228 06:29:11.024332   10425 pod_ready.go:86] duration metric: took 387.775917ms for pod "kube-controller-manager-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:11.223937   10425 pod_ready.go:83] waiting for pod "kube-proxy-qll2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:11.624641   10425 pod_ready.go:94] pod "kube-proxy-qll2z" is "Ready"
	I1228 06:29:11.624671   10425 pod_ready.go:86] duration metric: took 400.708539ms for pod "kube-proxy-qll2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:11.824618   10425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:12.224262   10425 pod_ready.go:94] pod "kube-scheduler-addons-614829" is "Ready"
	I1228 06:29:12.224289   10425 pod_ready.go:86] duration metric: took 399.645418ms for pod "kube-scheduler-addons-614829" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:29:12.224302   10425 pod_ready.go:40] duration metric: took 1.603693539s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
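For context on the pod_ready lines above: the driver polls each kube-system control-plane pod, matched by label, until it reports the Ready condition, within an overall 4m0s budget. A rough client-go sketch of that pattern (not minikube's actual pod_ready implementation; the kubeconfig path, the single selector, and the poll interval are assumptions):

// Minimal wait loop: poll pods matching a label selector in kube-system
// until the first match reports the Ready condition, or the budget expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}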
	I1228 06:29:12.265960   10425 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:29:12.268827   10425 out.go:179] * Done! kubectl is now configured to use "addons-614829" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 28 06:29:13 addons-614829 crio[770]: time="2025-12-28T06:29:13.12261391Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c60a38c-21f9-4927-bf30-d6f2d76141ac name=/runtime.v1.ImageService/PullImage
	Dec 28 06:29:13 addons-614829 crio[770]: time="2025-12-28T06:29:13.122952724Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.390476907Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5c60a38c-21f9-4927-bf30-d6f2d76141ac name=/runtime.v1.ImageService/PullImage
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.391102959Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=720f9698-69e1-46c7-b3f3-e52c32360274 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.392738797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ad36747-5e89-438b-bcca-e7e15b09291d name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.395690628Z" level=info msg="Creating container: default/busybox/busybox" id=5548ec46-94db-4db8-bae2-a84b9b431462 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.395809292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.401546994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.402575041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.435863794Z" level=info msg="Created container e54520bc76f39f774e7cf15cbb98980a406b5d010a938b638d774a108bb2fdd6: default/busybox/busybox" id=5548ec46-94db-4db8-bae2-a84b9b431462 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.436420182Z" level=info msg="Starting container: e54520bc76f39f774e7cf15cbb98980a406b5d010a938b638d774a108bb2fdd6" id=c977be6a-8ba4-4009-bc51-3e945657f656 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:29:14 addons-614829 crio[770]: time="2025-12-28T06:29:14.437999581Z" level=info msg="Started container" PID=6464 containerID=e54520bc76f39f774e7cf15cbb98980a406b5d010a938b638d774a108bb2fdd6 description=default/busybox/busybox id=c977be6a-8ba4-4009-bc51-3e945657f656 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3559f0ddeb7684657df2032eac135346afa02f05ba0b62732e4ad0282e65aecb
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.031461604Z" level=info msg="Running pod sandbox: default/nginx/POD" id=72b27097-fea4-4ed8-855f-37297cf0e9b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.031545729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.038774089Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:b89cb9f43793a89c8a4c25da4c6adf973774dd1b4eefb6b5fe673fe67291a30c UID:a26a6959-e6b2-443c-89f1-e370d73e056a NetNS:/var/run/netns/8e545331-f940-435f-a91c-072246e36758 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0013f02c0}] Aliases:map[]}"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.038815568Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.059624897Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:b89cb9f43793a89c8a4c25da4c6adf973774dd1b4eefb6b5fe673fe67291a30c UID:a26a6959-e6b2-443c-89f1-e370d73e056a NetNS:/var/run/netns/8e545331-f940-435f-a91c-072246e36758 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0013f02c0}] Aliases:map[]}"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.059768236Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.061409779Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.062195729Z" level=info msg="Ran pod sandbox b89cb9f43793a89c8a4c25da4c6adf973774dd1b4eefb6b5fe673fe67291a30c with infra container: default/nginx/POD" id=72b27097-fea4-4ed8-855f-37297cf0e9b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.063439527Z" level=info msg="Checking image status: public.ecr.aws/nginx/nginx:alpine" id=3ec92116-5ea6-4d15-a89c-97dae455f336 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.063537395Z" level=info msg="Image public.ecr.aws/nginx/nginx:alpine not found" id=3ec92116-5ea6-4d15-a89c-97dae455f336 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.063592689Z" level=info msg="Neither image nor artifact public.ecr.aws/nginx/nginx:alpine found" id=3ec92116-5ea6-4d15-a89c-97dae455f336 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.064432635Z" level=info msg="Pulling image: public.ecr.aws/nginx/nginx:alpine" id=6394e242-7649-485d-878c-aeda94e6e059 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:29:21 addons-614829 crio[770]: time="2025-12-28T06:29:21.064756244Z" level=info msg="Trying to access \"public.ecr.aws/nginx/nginx:alpine\""
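The name=/runtime.v1.ImageService/... fields in the CRI-O log above are CRI gRPC methods: CRI-O first answers an ImageStatus probe ("Checking image status"), reports the image missing, and only then pulls. A hedged sketch of the same probe against CRI-O's conventional unix socket (the socket path is an assumption, not taken from this report):

// Query CRI-O's runtime.v1 ImageService for an image's status.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "public.ecr.aws/nginx/nginx:alpine"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		// Corresponds to the "Image ... not found" branch in the log above,
		// after which the kubelet asks the runtime to pull.
		fmt.Println("image not found")
		return
	}
	fmt.Printf("image present: %s (%d bytes)\n", resp.Image.Id, resp.Image.Size_)
}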
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e54520bc76f39       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   3559f0ddeb768       busybox                                     default
	90b42cfa1afc3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	c8a0351baf606       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          12 seconds ago       Running             csi-provisioner                          0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	091e48c76be9d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            13 seconds ago       Running             liveness-probe                           0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	783cf672c7740       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 seconds ago       Running             hostpath                                 0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	a8b13319c0c70       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 seconds ago       Running             node-driver-registrar                    0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	bd2e0937c845a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   95e0d81423d53       gcp-auth-5bbcf684b5-gmxjg                   gcp-auth
	fa64d2677caf2       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             17 seconds ago       Running             controller                               0                   f666e94012971       ingress-nginx-controller-7847b5c79c-h9vsn   ingress-nginx
	e4588847a18d5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   21 seconds ago       Exited              patch                                    1                   e4b402b9d5c63       gcp-auth-certs-patch-9qgmn                  gcp-auth
	dad0492f40217       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   21 seconds ago       Exited              patch                                    1                   1b58e03b67c8e       ingress-nginx-admission-patch-fhxfk         ingress-nginx
	627fce7bc3dcc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            21 seconds ago       Running             gadget                                   0                   f151fb8f271dc       gadget-dzhfb                                gadget
	439e2a78f1df9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   db3545c35a87f       registry-proxy-zbdsv                        kube-system
	6f6282244b0e2       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     25 seconds ago       Running             nvidia-device-plugin-ctr                 0                   627128bb834d7       nvidia-device-plugin-daemonset-hshng        kube-system
	ec0a42d9ee3e6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   3078df5907544       csi-hostpathplugin-mjjzs                    kube-system
	8fc0fc2e12bd1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   9fff36e60e450       amd-gpu-device-plugin-8bks6                 kube-system
	4c9844e69aa84       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              30 seconds ago       Running             csi-resizer                              0                   c2406e95aae93       csi-hostpath-resizer-0                      kube-system
	c78ed51190897       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   ce5b777a6e731       snapshot-controller-6588d87457-7p9p7        kube-system
	44a50e2e34277       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   2455e86915e1e       snapshot-controller-6588d87457-rgwjp        kube-system
	328dd756d7731       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   31 seconds ago       Exited              create                                   0                   68d6e060a928f       gcp-auth-certs-create-hwl8b                 gcp-auth
	b76a7bf905b85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             32 seconds ago       Running             csi-attacher                             0                   2e5690e9836c2       csi-hostpath-attacher-0                     kube-system
	a9ce7193ea024       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   32 seconds ago       Exited              create                                   0                   5be5079fe004e       ingress-nginx-admission-create-ffflb        ingress-nginx
	d8c3b132a1510       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               33 seconds ago       Running             cloud-spanner-emulator                   0                   b8eed9757c059       cloud-spanner-emulator-5649ccbc87-5jklq     default
	c7a7a59a1612e       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  37 seconds ago       Running             yakd                                     0                   fed5fdd45a5b8       yakd-dashboard-7bcf5795cd-tqdfs             yakd-dashboard
	9c9dd67738923       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           39 seconds ago       Running             registry                                 0                   58ff5bfa3c6dd       registry-788cd7d5bc-px79j                   kube-system
	bbb53a1d083b2       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             40 seconds ago       Running             local-path-provisioner                   0                   9c73fcacd7bcb       local-path-provisioner-c44bcd496-8km2x      local-path-storage
	1ecdd886a19bc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        40 seconds ago       Running             metrics-server                           0                   9db0b1e55393e       metrics-server-5778bb4788-g62lj             kube-system
	749d63c7eafbb       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   c7b34a70d81bc       kube-ingress-dns-minikube                   kube-system
	070b7f2ced068       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             46 seconds ago       Running             coredns                                  0                   cbe2766ee7854       coredns-7d764666f9-hqcjk                    kube-system
	a4fa0f1ff0938       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             46 seconds ago       Running             storage-provisioner                      0                   fbdd5c0799d47       storage-provisioner                         kube-system
	87527bad1941d       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           57 seconds ago       Running             kindnet-cni                              0                   5d4f0cefb980d       kindnet-2w6qk                               kube-system
	f4b787cad381f       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             59 seconds ago       Running             kube-proxy                               0                   595bad0954fdd       kube-proxy-qll2z                            kube-system
	8f5809ef76473       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago   Running             kube-scheduler                           0                   1720389818d7d       kube-scheduler-addons-614829                kube-system
	25502865bf561       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago   Running             kube-controller-manager                  0                   f0edf874fb0d1       kube-controller-manager-addons-614829       kube-system
	93a1936361e84       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago   Running             kube-apiserver                           0                   9e82f717e1614       kube-apiserver-addons-614829                kube-system
	02f68477154e2       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago   Running             etcd                                     0                   908e5fee5a7be       etcd-addons-614829                          kube-system
	
	
	==> describe nodes <==
	Name:               addons-614829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-614829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=addons-614829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_28_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-614829
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-614829"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:28:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-614829
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:29:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:29:17 +0000   Sun, 28 Dec 2025 06:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:29:17 +0000   Sun, 28 Dec 2025 06:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:29:17 +0000   Sun, 28 Dec 2025 06:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:29:17 +0000   Sun, 28 Dec 2025 06:28:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-614829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                83dee020-b02a-4d33-b9ee-bbfeb1f36011
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5649ccbc87-5jklq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-dzhfb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  gcp-auth                    gcp-auth-5bbcf684b5-gmxjg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-h9vsn    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         59s
	  kube-system                 amd-gpu-device-plugin-8bks6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-7d764666f9-hqcjk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     60s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpathplugin-mjjzs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 etcd-addons-614829                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         66s
	  kube-system                 kindnet-2w6qk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-addons-614829                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-addons-614829        200m (2%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-qll2z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-addons-614829                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 metrics-server-5778bb4788-g62lj              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         59s
	  kube-system                 nvidia-device-plugin-daemonset-hshng         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 registry-788cd7d5bc-px79j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 registry-creds-567fb78d95-dbpdf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-proxy-zbdsv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 snapshot-controller-6588d87457-7p9p7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 snapshot-controller-6588d87457-rgwjp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  local-path-storage          local-path-provisioner-c44bcd496-8km2x       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-tqdfs              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  61s   node-controller  Node addons-614829 event: Registered Node addons-614829 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:29:22 up 11 min,  0 user,  load average: 1.24, 0.55, 0.20
	Linux addons-614829 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:29:03 addons-614829 kubelet[1271]: I1228 06:29:03.336454    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdd5406-444f-4c1d-87b6-05136650efed-kube-api-access-2ng9s" pod "abdd5406-444f-4c1d-87b6-05136650efed" (UID: "abdd5406-444f-4c1d-87b6-05136650efed"). InnerVolumeSpecName "kube-api-access-2ng9s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 28 06:29:03 addons-614829 kubelet[1271]: I1228 06:29:03.434922    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ng9s\" (UniqueName: \"kubernetes.io/projected/abdd5406-444f-4c1d-87b6-05136650efed-kube-api-access-2ng9s\") on node \"addons-614829\" DevicePath \"\""
	Dec 28 06:29:03 addons-614829 kubelet[1271]: I1228 06:29:03.504272    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b402b9d5c6324669248360d7e498560357d8e51232cdf8d34102a4288ca223"
	Dec 28 06:29:03 addons-614829 kubelet[1271]: I1228 06:29:03.506257    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b58e03b67c8eea4c0fb7c5046232540e43c98b4ee580fc7358e52a648079c8e"
	Dec 28 06:29:04 addons-614829 kubelet[1271]: E1228 06:29:04.472741    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dzhfb" containerName="gadget"
	Dec 28 06:29:04 addons-614829 kubelet[1271]: E1228 06:29:04.511157    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-h9vsn" containerName="controller"
	Dec 28 06:29:04 addons-614829 kubelet[1271]: I1228 06:29:04.522243    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-h9vsn" podStartSLOduration=28.430915469 podStartE2EDuration="41.522226857s" podCreationTimestamp="2025-12-28 06:28:23 +0000 UTC" firstStartedPulling="2025-12-28 06:28:51.068116082 +0000 UTC m=+34.890609079" lastFinishedPulling="2025-12-28 06:29:04.159427466 +0000 UTC m=+47.981920467" observedRunningTime="2025-12-28 06:29:04.521665675 +0000 UTC m=+48.344158679" watchObservedRunningTime="2025-12-28 06:29:04.522226857 +0000 UTC m=+48.344719861"
	Dec 28 06:29:04 addons-614829 kubelet[1271]: E1228 06:29:04.548642    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dzhfb" containerName="gadget"
	Dec 28 06:29:05 addons-614829 kubelet[1271]: E1228 06:29:05.514458    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-dzhfb" containerName="gadget"
	Dec 28 06:29:05 addons-614829 kubelet[1271]: E1228 06:29:05.514576    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-h9vsn" containerName="controller"
	Dec 28 06:29:06 addons-614829 kubelet[1271]: E1228 06:29:06.961215    1271 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 28 06:29:06 addons-614829 kubelet[1271]: E1228 06:29:06.961307    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41a54e80-c251-47e4-af2f-c2b9230e01f4-gcr-creds podName:41a54e80-c251-47e4-af2f-c2b9230e01f4 nodeName:}" failed. No retries permitted until 2025-12-28 06:29:38.961283559 +0000 UTC m=+82.783776562 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/41a54e80-c251-47e4-af2f-c2b9230e01f4-gcr-creds") pod "registry-creds-567fb78d95-dbpdf" (UID: "41a54e80-c251-47e4-af2f-c2b9230e01f4") : secret "registry-creds-gcr" not found
	Dec 28 06:29:08 addons-614829 kubelet[1271]: I1228 06:29:08.302466    1271 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 28 06:29:08 addons-614829 kubelet[1271]: I1228 06:29:08.302506    1271 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 28 06:29:10 addons-614829 kubelet[1271]: E1228 06:29:10.546400    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-mjjzs" containerName="hostpath"
	Dec 28 06:29:10 addons-614829 kubelet[1271]: I1228 06:29:10.559625    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-gmxjg" podStartSLOduration=25.390724473 podStartE2EDuration="40.559609215s" podCreationTimestamp="2025-12-28 06:28:30 +0000 UTC" firstStartedPulling="2025-12-28 06:28:51.089369708 +0000 UTC m=+34.911862690" lastFinishedPulling="2025-12-28 06:29:06.258254447 +0000 UTC m=+50.080747432" observedRunningTime="2025-12-28 06:29:06.530750467 +0000 UTC m=+50.353243470" watchObservedRunningTime="2025-12-28 06:29:10.559609215 +0000 UTC m=+54.382102218"
	Dec 28 06:29:10 addons-614829 kubelet[1271]: I1228 06:29:10.560137    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-mjjzs" podStartSLOduration=1.200459136 podStartE2EDuration="35.560129406s" podCreationTimestamp="2025-12-28 06:28:35 +0000 UTC" firstStartedPulling="2025-12-28 06:28:35.577077635 +0000 UTC m=+19.399570634" lastFinishedPulling="2025-12-28 06:29:09.936747921 +0000 UTC m=+53.759240904" observedRunningTime="2025-12-28 06:29:10.558472299 +0000 UTC m=+54.380965299" watchObservedRunningTime="2025-12-28 06:29:10.560129406 +0000 UTC m=+54.382622409"
	Dec 28 06:29:11 addons-614829 kubelet[1271]: E1228 06:29:11.550215    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-mjjzs" containerName="hostpath"
	Dec 28 06:29:12 addons-614829 kubelet[1271]: I1228 06:29:12.907772    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/332c81f3-8d79-40b0-b4ce-5e026e0ac87d-gcp-creds\") pod \"busybox\" (UID: \"332c81f3-8d79-40b0-b4ce-5e026e0ac87d\") " pod="default/busybox"
	Dec 28 06:29:12 addons-614829 kubelet[1271]: I1228 06:29:12.907848    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxzqn\" (UniqueName: \"kubernetes.io/projected/332c81f3-8d79-40b0-b4ce-5e026e0ac87d-kube-api-access-bxzqn\") pod \"busybox\" (UID: \"332c81f3-8d79-40b0-b4ce-5e026e0ac87d\") " pod="default/busybox"
	Dec 28 06:29:14 addons-614829 kubelet[1271]: I1228 06:29:14.573408    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.303856894 podStartE2EDuration="2.573389807s" podCreationTimestamp="2025-12-28 06:29:12 +0000 UTC" firstStartedPulling="2025-12-28 06:29:13.122288945 +0000 UTC m=+56.944781927" lastFinishedPulling="2025-12-28 06:29:14.391821843 +0000 UTC m=+58.214314840" observedRunningTime="2025-12-28 06:29:14.571995606 +0000 UTC m=+58.394488609" watchObservedRunningTime="2025-12-28 06:29:14.573389807 +0000 UTC m=+58.395882814"
	Dec 28 06:29:15 addons-614829 kubelet[1271]: E1228 06:29:15.516894    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-h9vsn" containerName="controller"
	Dec 28 06:29:20 addons-614829 kubelet[1271]: E1228 06:29:20.067200    1271 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46930->127.0.0.1:34893: write tcp 127.0.0.1:46930->127.0.0.1:34893: write: broken pipe
	Dec 28 06:29:20 addons-614829 kubelet[1271]: I1228 06:29:20.872149    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxxrs\" (UniqueName: \"kubernetes.io/projected/a26a6959-e6b2-443c-89f1-e370d73e056a-kube-api-access-cxxrs\") pod \"nginx\" (UID: \"a26a6959-e6b2-443c-89f1-e370d73e056a\") " pod="default/nginx"
	Dec 28 06:29:20 addons-614829 kubelet[1271]: I1228 06:29:20.872231    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a26a6959-e6b2-443c-89f1-e370d73e056a-gcp-creds\") pod \"nginx\" (UID: \"a26a6959-e6b2-443c-89f1-e370d73e056a\") " pod="default/nginx"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:29:21.310110   19026 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.372689   19026 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.438273   19026 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.518813   19026 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.606799   19026 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.690091   19026 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.775800   19026 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.851411   19026 logs.go:279] Failed to list containers for "gcp-auth": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.921561   19026 logs.go:279] Failed to list containers for "controller_ingress": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:29:21.991827   19026 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:21Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
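Every stderr entry above is the same root cause seen in the addon-disable exits: the pause check shells out to `sudo runc --root /run/runc list -f json`, and on this image /run/runc does not exist, so runc fails with "open /run/runc: no such file or directory" before it can list anything. A hedged sketch of a more tolerant probe (not minikube's actual implementation) that treats a missing state root as an empty container list:

// List runc containers, but treat an absent state root as "no containers"
// rather than an error, since runc has never written state there.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func listRuncContainers(root string) ([]byte, error) {
	if _, err := os.Stat(root); os.IsNotExist(err) {
		return []byte("[]"), nil // nothing created under this root, so nothing paused
	}
	return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
}

func main() {
	out, err := listRuncContainers("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, "list failed:", err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}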
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-614829 -n addons-614829
helpers_test.go:270: (dbg) Run:  kubectl --context addons-614829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-9qgmn ingress-nginx-admission-create-ffflb ingress-nginx-admission-patch-fhxfk registry-creds-567fb78d95-dbpdf
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-614829 describe pod gcp-auth-certs-patch-9qgmn ingress-nginx-admission-create-ffflb ingress-nginx-admission-patch-fhxfk registry-creds-567fb78d95-dbpdf
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-614829 describe pod gcp-auth-certs-patch-9qgmn ingress-nginx-admission-create-ffflb ingress-nginx-admission-patch-fhxfk registry-creds-567fb78d95-dbpdf: exit status 1 (60.819528ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-9qgmn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-ffflb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fhxfk" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-dbpdf" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-614829 describe pod gcp-auth-certs-patch-9qgmn ingress-nginx-admission-create-ffflb ingress-nginx-admission-patch-fhxfk registry-creds-567fb78d95-dbpdf: exit status 1
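The NotFound errors above are most likely a race: the field-selector query found the four non-running pods, but the completed hook pods were cleaned up before the follow-up describe ran. The same query expressed with client-go, as a minimal sketch (kubeconfig discovery as in the earlier sketch; nothing here beyond the selector comes from the report):

// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}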
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable headlamp --alsologtostderr -v=1: exit status 11 (255.141844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:22.883882   19983 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:22.884057   19983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:22.884070   19983 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:22.884076   19983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:22.884280   19983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:22.884538   19983 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:22.884834   19983 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:22.884851   19983 addons.go:622] checking whether the cluster is paused
	I1228 06:29:22.884935   19983 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:22.884947   19983 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:22.885311   19983 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:22.903520   19983 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:22.903582   19983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:22.921716   19983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:23.011948   19983 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:23.064731   19983 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:23.078393   19983 out.go:203] 
	W1228 06:29:23.079597   19983 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:23.079627   19983 out.go:285] * 
	* 
	W1228 06:29:23.080336   19983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:23.081508   19983 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.76s)
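
Every addon-disable failure in this run exits through the same pre-flight check: before touching the addon, minikube verifies the cluster is not paused by running `sudo runc --root /run/runc list -f json` on the node, and that probe fails because /run/runc does not exist. A minimal diagnostic sketch in shell, assuming node access via `minikube ssh -p addons-614829` and assuming (not confirmed by this log) that the image's CRI-O is configured with crun, whose state lives under /run/crun rather than /run/runc:

    # Reproduce the exact probe from the log above; expected to fail on this image.
    sudo runc --root /run/runc list -f json

    # See which OCI runtime state directories actually exist; /run/crun is an assumption.
    ls -d /run/runc /run/crun 2>/dev/null

    # Ask CRI-O which runtime it is configured to use.
    sudo crio config | grep -E 'default_runtime|runtime_path|runtime_root'

If the configured runtime keeps its state somewhere other than /run/runc, every `addons disable`, `pause`, and `unpause` in this report fails the same way, which matches the repeated MK_ADDON_DISABLE_PAUSED, GUEST_PAUSE, and GUEST_UNPAUSE exits below.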

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-5jklq" [adc72809-761a-4573-bbdd-522027a7cd9b] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003530658s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (251.766877ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:35.951801   21594 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:35.952096   21594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:35.952105   21594 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:35.952109   21594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:35.952307   21594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:35.952543   21594 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:35.952830   21594 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:35.952847   21594 addons.go:622] checking whether the cluster is paused
	I1228 06:29:35.952933   21594 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:35.952949   21594 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:35.953339   21594 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:35.970906   21594 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:35.970952   21594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:35.987936   21594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:36.076700   21594 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:36.129785   21594 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:36.143668   21594 out.go:203] 
	W1228 06:29:36.144953   21594 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:36.144973   21594 out.go:285] * 
	* 
	W1228 06:29:36.145680   21594 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:36.146784   21594 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
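
The emulator itself was healthy well inside the 6-minute budget; only the shared disable step failed. The harness's readiness wait (addons_test.go:842) can be approximated with a plain kubectl wait against the same selector and namespace shown above, sketched here for manual re-checks:

    # Rough equivalent of: waiting 6m0s for pods matching "app=cloud-spanner-emulator".
    kubectl --context addons-614829 -n default wait pod \
        -l app=cloud-spanner-emulator --for=condition=Ready --timeout=6m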

                                                
                                    
TestAddons/parallel/LocalPath (8.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-614829 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-614829 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [8752f000-de00-4ede-9533-4c946ec72f34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [8752f000-de00-4ede-9533-4c946ec72f34] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [8752f000-de00-4ede-9533-4c946ec72f34] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004001154s
addons_test.go:969: (dbg) Run:  kubectl --context addons-614829 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 ssh "cat /opt/local-path-provisioner/pvc-fda575a8-f5f4-4c82-964b-d99cf6874ae2_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-614829 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-614829 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (254.542228ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:36.979916   21882 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:36.980101   21882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:36.980111   21882 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:36.980115   21882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:36.980334   21882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:36.980590   21882 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:36.980883   21882 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:36.980904   21882 addons.go:622] checking whether the cluster is paused
	I1228 06:29:36.981002   21882 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:36.981024   21882 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:36.981413   21882 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:36.999888   21882 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:36.999958   21882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:37.016685   21882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:37.105272   21882 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:37.157333   21882 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:37.171794   21882 out.go:203] 
	W1228 06:29:37.172991   21882 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:37.173016   21882 out.go:285] * 
	* 
	W1228 06:29:37.173757   21882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:37.175210   21882 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.12s)
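
The storage round-trip passed end to end: the PVC bound, the pod wrote file1 under /opt/local-path-provisioner, and the content was read back; only the disable step hit the runc probe. A sketch replaying the passing part by hand, assuming the same testdata manifests the harness applied above:

    # Apply the same PVC and pod manifests the test used.
    kubectl --context addons-614829 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-614829 apply -f testdata/storage-provisioner-rancher/pod.yaml

    # Wait for the pod to run to completion, then clean up as the test does.
    kubectl --context addons-614829 wait pod/test-local-path \
        --for=jsonpath='{.status.phase}'=Succeeded --timeout=180s
    kubectl --context addons-614829 delete pod test-local-path
    kubectl --context addons-614829 delete pvc test-pvc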

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-hshng" [3c12654c-1724-4691-ab9d-82822bb409b5] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002678448s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (249.433553ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:33.437889   21215 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:33.438170   21215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:33.438179   21215 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:33.438183   21215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:33.438384   21215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:33.438642   21215 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:33.438923   21215 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:33.438938   21215 addons.go:622] checking whether the cluster is paused
	I1228 06:29:33.439021   21215 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:33.439049   21215 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:33.439429   21215 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:33.456836   21215 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:33.456899   21215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:33.474234   21215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:33.563611   21215 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:33.613748   21215 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:33.628286   21215 out.go:203] 
	W1228 06:29:33.629785   21215 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:33.629809   21215 out.go:285] * 
	* 
	W1228 06:29:33.630552   21215 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:33.632072   21215 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
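
Because the command exits in the paused-state probe before any teardown starts, the addon workload is left running. A hypothetical manual cleanup, assuming the DaemonSet is named after the pod prefix nvidia-device-plugin-daemonset seen above (the name is inferred, not confirmed by this log):

    # Remove the addon workload directly; the DaemonSet name is an inference from the pod name.
    kubectl --context addons-614829 -n kube-system delete daemonset nvidia-device-plugin-daemonset

    # Retry the normal path once the runc probe issue is resolved.
    out/minikube-linux-amd64 -p addons-614829 addons disable nvidia-device-plugin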

                                                
                                    
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-tqdfs" [4098256e-7429-4057-b506-4dd1d1b32748] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003260417s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.293591ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:30.694142   21042 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:30.694326   21042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:30.694338   21042 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:30.694342   21042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:30.694551   21042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:30.694782   21042 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:30.695090   21042 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:30.695108   21042 addons.go:622] checking whether the cluster is paused
	I1228 06:29:30.695192   21042 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:30.695203   21042 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:30.695552   21042 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:30.713924   21042 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:30.713974   21042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:30.732286   21042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:30.821636   21042 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:30.870016   21042 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:30.884272   21042 out.go:203] 
	W1228 06:29:30.885400   21042 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:30.885414   21042 out.go:285] * 
	* 
	W1228 06:29:30.886086   21042 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:30.887223   21042 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-8bks6" [b3223ec7-1e81-4305-9361-cde1ebc1cf93] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003359689s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-614829 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-614829 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (293.057661ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:29:28.151208   20287 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:29:28.151725   20287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.151740   20287 out.go:374] Setting ErrFile to fd 2...
	I1228 06:29:28.151746   20287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:29:28.152045   20287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:29:28.152447   20287 mustload.go:66] Loading cluster: addons-614829
	I1228 06:29:28.152807   20287 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.152829   20287 addons.go:622] checking whether the cluster is paused
	I1228 06:29:28.152919   20287 config.go:182] Loaded profile config "addons-614829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:29:28.152930   20287 host.go:66] Checking if "addons-614829" exists ...
	I1228 06:29:28.153291   20287 cli_runner.go:164] Run: docker container inspect addons-614829 --format={{.State.Status}}
	I1228 06:29:28.176877   20287 ssh_runner.go:195] Run: systemctl --version
	I1228 06:29:28.176981   20287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-614829
	I1228 06:29:28.202461   20287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/addons-614829/id_rsa Username:docker}
	I1228 06:29:28.295333   20287 ssh_runner.go:195] Run: sudo crio config
	I1228 06:29:28.354377   20287 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:29:28.372429   20287 out.go:203] 
	W1228 06:29:28.373713   20287 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:29:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:29:28.373730   20287 out.go:285] * 
	* 
	W1228 06:29:28.374545   20287 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:29:28.375869   20287 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-614829 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.30s)

                                                
                                    
TestJSONOutput/pause/Command (2.02s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-995051 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-995051 --output=json --user=testUser: exit status 80 (2.018780933s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e8bc07ed-7393-4f66-af03-78e45283bcc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-995051 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"dd11f8d5-17db-454f-bf16-541676ac6115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-28T06:40:51Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9b8ed27c-7329-46fd-b0a7-0fb0f3f288f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-995051 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.02s)
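
In --output=json mode every line is a self-contained CloudEvents-style object, so the GUEST_PAUSE error above (and the GUEST_UNPAUSE one in the next test) can be extracted mechanically instead of by eye. A sketch using jq, assuming the stream is captured to pause.json:

    # Capture the stream; the command exits 80, so keep the shell going.
    out/minikube-linux-amd64 pause -p json-output-995051 --output=json --user=testUser > pause.json || true

    # Print name and message for each error event; the advice-box event carries no name.
    jq -r 'select(.type == "io.k8s.sigs.minikube.error")
           | (.data.name // "unnamed") + ": " + .data.message' pause.json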

                                                
                                    
TestJSONOutput/unpause/Command (1.94s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-995051 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-995051 --output=json --user=testUser: exit status 80 (1.939213147s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"acbfc05c-abba-4f04-aea6-2c6404f236d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-995051 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"3b03c8d7-873d-4aac-869a-c9b470383e27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-28T06:40:53Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b050e56f-9bc7-43ed-a029-484cd70c5095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-995051 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.94s)

                                                
                                    
TestPause/serial/Pause (6.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-407564 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-407564 --alsologtostderr -v=5: exit status 80 (2.32172788s)

                                                
                                                
-- stdout --
	* Pausing node pause-407564 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:51:39.538236  181774 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:51:39.538548  181774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:39.538562  181774 out.go:374] Setting ErrFile to fd 2...
	I1228 06:51:39.538569  181774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:39.538843  181774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:51:39.539137  181774 out.go:368] Setting JSON to false
	I1228 06:51:39.539155  181774 mustload.go:66] Loading cluster: pause-407564
	I1228 06:51:39.539518  181774 config.go:182] Loaded profile config "pause-407564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:51:39.539902  181774 cli_runner.go:164] Run: docker container inspect pause-407564 --format={{.State.Status}}
	I1228 06:51:39.559014  181774 host.go:66] Checking if "pause-407564" exists ...
	I1228 06:51:39.559382  181774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:51:39.619050  181774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-28 06:51:39.609336044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:51:39.619597  181774 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-407564 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:51:39.621614  181774 out.go:179] * Pausing node pause-407564 ... 
	I1228 06:51:39.623887  181774 host.go:66] Checking if "pause-407564" exists ...
	I1228 06:51:39.624246  181774 ssh_runner.go:195] Run: systemctl --version
	I1228 06:51:39.624283  181774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-407564
	I1228 06:51:39.641731  181774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/pause-407564/id_rsa Username:docker}
	I1228 06:51:39.731507  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:39.743280  181774 pause.go:52] kubelet running: true
	I1228 06:51:39.743381  181774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:51:39.885978  181774 ssh_runner.go:195] Run: sudo crio config
	I1228 06:51:39.948694  181774 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:51:39.962796  181774 retry.go:84] will retry after 200ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:39Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:51:40.131408  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:40.145229  181774 pause.go:52] kubelet running: false
	I1228 06:51:40.145291  181774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:51:40.277099  181774 ssh_runner.go:195] Run: sudo crio config
	I1228 06:51:40.343375  181774 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:51:40.894969  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:40.909272  181774 pause.go:52] kubelet running: false
	I1228 06:51:40.909334  181774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:51:41.047556  181774 ssh_runner.go:195] Run: sudo crio config
	I1228 06:51:41.104618  181774 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:51:41.577946  181774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:51:41.591174  181774 pause.go:52] kubelet running: false
	I1228 06:51:41.591236  181774 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:51:41.721059  181774 ssh_runner.go:195] Run: sudo crio config
	I1228 06:51:41.779269  181774 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:51:41.794171  181774 out.go:203] 
	W1228 06:51:41.795298  181774 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:51:41.795314  181774 out.go:285] * 
	* 
	W1228 06:51:41.796975  181774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:51:41.798116  181774 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-407564 --alsologtostderr -v=5" : exit status 80
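
Unlike the addons path, the pause path retries the probe before giving up: retry.go:84 above shows a 200ms backoff, and four `runc ... list` attempts are visible between 06:51:39 and 06:51:41. Roughly, as a shell sketch of the behavior the log shows (not minikube's actual code):

    # Four attempts with a short sleep, mirroring the retry loop visible above.
    for attempt in 1 2 3 4; do
        sudo runc --root /run/runc list -f json && break
        sleep 0.2
    done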
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-407564
helpers_test.go:244: (dbg) docker inspect pause-407564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92",
	        "Created": "2025-12-28T06:50:53.802995886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 164672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:50:53.872508733Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/hostname",
	        "HostsPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/hosts",
	        "LogPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92-json.log",
	        "Name": "/pause-407564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-407564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-407564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92",
	                "LowerDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-407564",
	                "Source": "/var/lib/docker/volumes/pause-407564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-407564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-407564",
	                "name.minikube.sigs.k8s.io": "pause-407564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7947dc054d214fa07d646acfbd43cf9d2e1e8489149f09d1efdf6b9f759152c0",
	            "SandboxKey": "/var/run/docker/netns/7947dc054d21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-407564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6dc37f03eb7f1110cc6d5d5f5b317ab3bfa04189869300294b6a9b54e0f5047a",
	                    "EndpointID": "51e3a9c31883848b7895718e2e4efe959e423c6735c74f87afc2c39a5c72630b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:a5:db:4a:f3:31",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-407564",
	                        "153cbe92cf33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
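Two details in this inspect output bear on the failure above: /run is mounted as tmpfs, so runtime state directories vanish and are recreated whenever the guest services restart, and the container's State is Running, so the failed pause is a guest-level problem rather than a stopped host. Single fields can be pulled with Go templates instead of re-reading the full JSON; a small sketch using field names from the output above:

	# Spot-check only the tmpfs mounts and the process state:
	docker inspect -f '{{.HostConfig.Tmpfs}}' pause-407564
	docker inspect -f '{{.State.Status}} {{.State.Pid}}' pause-407564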
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-407564 -n pause-407564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-407564 -n pause-407564: exit status 2 (363.377519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-407564 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │ 28 Dec 25 06:49 UTC │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │ 28 Dec 25 06:50 UTC │
	│ delete  │ -p scheduled-stop-847755                                                                                                                                                                                                  │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:50 UTC │
	│ start   │ -p insufficient-storage-614853 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-614853 │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │                     │
	│ delete  │ -p insufficient-storage-614853                                                                                                                                                                                            │ insufficient-storage-614853 │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:50 UTC │
	│ start   │ -p force-systemd-env-421965 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-421965    │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p pause-407564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p offline-crio-376432 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-376432         │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p stopped-upgrade-416029 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-416029      │ jenkins │ v1.35.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ delete  │ -p force-systemd-env-421965                                                                                                                                                                                               │ force-systemd-env-421965    │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p force-systemd-flag-095404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ stop    │ stopped-upgrade-416029 stop                                                                                                                                                                                               │ stopped-upgrade-416029      │ jenkins │ v1.35.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p stopped-upgrade-416029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-416029      │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ delete  │ -p offline-crio-376432                                                                                                                                                                                                    │ offline-crio-376432         │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-623987      │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ start   │ -p pause-407564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ ssh     │ force-systemd-flag-095404 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ delete  │ -p force-systemd-flag-095404                                                                                                                                                                                              │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ pause   │ -p pause-407564 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ start   │ -p cert-options-943497 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-943497         │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:51:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:51:40.822819  182481 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:51:40.823056  182481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:40.823060  182481 out.go:374] Setting ErrFile to fd 2...
	I1228 06:51:40.823063  182481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:40.823269  182481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:51:40.823752  182481 out.go:368] Setting JSON to false
	I1228 06:51:40.824820  182481 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2053,"bootTime":1766902648,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:51:40.824872  182481 start.go:143] virtualization: kvm guest
	I1228 06:51:40.829746  182481 out.go:179] * [cert-options-943497] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:51:40.831584  182481 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:51:40.831620  182481 notify.go:221] Checking for updates...
	I1228 06:51:40.834176  182481 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:51:40.835464  182481 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:51:40.837067  182481 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:51:40.838294  182481 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:51:40.839392  182481 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:51:40.841018  182481 config.go:182] Loaded profile config "cert-expiration-623987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:51:40.841171  182481 config.go:182] Loaded profile config "pause-407564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:51:40.841242  182481 config.go:182] Loaded profile config "stopped-upgrade-416029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1228 06:51:40.841309  182481 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:51:40.866213  182481 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:51:40.866280  182481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:51:40.925618  182481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:51:40.915452393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:51:40.925743  182481 docker.go:319] overlay module found
	I1228 06:51:40.930567  182481 out.go:179] * Using the docker driver based on user configuration
	I1228 06:51:40.931900  182481 start.go:309] selected driver: docker
	I1228 06:51:40.931908  182481 start.go:928] validating driver "docker" against <nil>
	I1228 06:51:40.931917  182481 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:51:40.932557  182481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:51:40.998540  182481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:51:40.987967705 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:51:40.998671  182481 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:51:40.998866  182481 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:51:41.000886  182481 out.go:179] * Using Docker driver with root privileges
	I1228 06:51:41.002237  182481 cni.go:84] Creating CNI manager for ""
	I1228 06:51:41.002297  182481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:51:41.002303  182481 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:51:41.002370  182481 start.go:353] cluster config:
	{Name:cert-options-943497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-options-943497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:51:41.008092  182481 out.go:179] * Starting "cert-options-943497" primary control-plane node in "cert-options-943497" cluster
	I1228 06:51:41.009487  182481 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:51:41.010679  182481 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:51:41.011804  182481 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:51:41.011834  182481 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:51:41.011856  182481 cache.go:65] Caching tarball of preloaded images
	I1228 06:51:41.011895  182481 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:51:41.011953  182481 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:51:41.011962  182481 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:51:41.012106  182481 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-options-943497/config.json ...
	I1228 06:51:41.012127  182481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-options-943497/config.json: {Name:mk589390f747a223db49ee198755d1bd874b64c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.038620  182481 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:51:41.038630  182481 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:51:41.038648  182481 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:51:41.038692  182481 start.go:360] acquireMachinesLock for cert-options-943497: {Name:mka040e8e15d71e208e33f95def2bee8478a80e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:51:41.038797  182481 start.go:364] duration metric: took 91.452µs to acquireMachinesLock for "cert-options-943497"
	I1228 06:51:41.038823  182481 start.go:93] Provisioning new machine with config: &{Name:cert-options-943497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-options-943497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:51:41.038897  182481 start.go:125] createHost starting for "" (driver="docker")
	I1228 06:51:40.030383  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1228 06:51:40.030436  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
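	The two lines above come from pid 174872, a different minikube process whose output is interleaved into this trace; it is polling the health endpoint of another profile's apiserver at 192.168.103.2:8443 and is unrelated to the cert-options-943497 start. The equivalent manual probe, assuming that apiserver is reachable (self-signed CA, hence -k; an illustrative check, not from the run):
	
	    curl -k https://192.168.103.2:8443/healthz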
	
	
	==> CRI-O <==
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28693154Z" level=info msg="RDT not available in the host system"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28694133Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.287907711Z" level=info msg="Conmon does support the --sync option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.287938144Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28795653Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28886648Z" level=info msg="Conmon does support the --sync option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.288883258Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.29444553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.294472241Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295005525Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"en
forcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [cri
o.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295423948Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295477881Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.363338949Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-b6b9t Namespace:kube-system ID:6c515a2cb4c0a2dd806a680a1b572df17142f6ba7a40f2921d5540ce900754ed UID:8cc7b946-c84e-43bb-aa1d-48fb4dfd9862 NetNS:/var/run/netns/be1c5c5b-8286-496c-afcb-5dd3a51c237c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059a2a0}] Aliases:map[]}"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.363570357Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-b6b9t for CNI network kindnet (type=ptp)"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364421106Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364447895Z" level=info msg="Starting seccomp notifier watcher"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364504896Z" level=info msg="Create NRI interface"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364615432Z" level=info msg="built-in NRI default validator is disabled"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364625666Z" level=info msg="runtime interface created"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364641822Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.3646497Z" level=info msg="runtime interface starting up..."
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364657308Z" level=info msg="starting plugins..."
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364671214Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364978534Z" level=info msg="No systemd watchdog enabled"
	Dec 28 06:51:36 pause-407564 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
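	The configuration dump above contains the root cause of the pause failure: default_runtime = "crun" with runtime_root = "/run/crun", while the runc runtime entry keeps runtime_root = "/run/runc", the path minikube's pause check probes. A quick way to pull just those keys from the live merged config, assuming SSH access via the profile name:
	
	    # Extract the runtime selection and state roots:
	    minikube -p pause-407564 ssh "sudo crio config" | grep -E 'default_runtime|runtime_root'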
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bd2e56774124a       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     13 seconds ago      Running             coredns                   0                   6c515a2cb4c0a       coredns-7d764666f9-b6b9t               kube-system
	f593198b473cd       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   d2c29c4ecdcac       kindnet-dmmg7                          kube-system
	4c2b6b9ecfcca       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     26 seconds ago      Running             kube-proxy                0                   6a8ceda656991       kube-proxy-jpqdf                       kube-system
	bbd4473aa7175       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     37 seconds ago      Running             etcd                      0                   9dfc0d8546269       etcd-pause-407564                      kube-system
	1f5dd65dc369c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     37 seconds ago      Running             kube-scheduler            0                   c483659cb15a6       kube-scheduler-pause-407564            kube-system
	d0301ed9f8af7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     37 seconds ago      Running             kube-controller-manager   0                   2c9e3a70fd2bf       kube-controller-manager-pause-407564   kube-system
	2408c603d45ac       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     37 seconds ago      Running             kube-apiserver            0                   f0f02c6030ed4       kube-apiserver-pause-407564            kube-system
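	This table is minikube's rendering of the CRI container list; every container is Running with ATTEMPT 0, so the runtime itself recovered after the restart logged above. The same view can be taken directly from the CRI socket, assuming crictl is present in the node image (it ships in recent kicbase builds):
	
	    # List all CRI containers, including exited ones:
	    minikube -p pause-407564 ssh "sudo crictl ps -a"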
	
	
	==> describe nodes <==
	Name:               pause-407564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-407564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=pause-407564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_51_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:51:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-407564
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-407564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                b271aa30-0f36-44e4-9d82-0d494fbd379c
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-b6b9t                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-407564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-dmmg7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-407564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-407564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-jpqdf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-407564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node pause-407564 event: Registered Node pause-407564 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:51:43 up 34 min,  0 user,  load average: 3.46, 1.85, 1.19
	Linux pause-407564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:51:29 pause-407564 kubelet[1291]: I1228 06:51:29.003369    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds24\" (UniqueName: \"kubernetes.io/projected/8cc7b946-c84e-43bb-aa1d-48fb4dfd9862-kube-api-access-zds24\") pod \"coredns-7d764666f9-b6b9t\" (UID: \"8cc7b946-c84e-43bb-aa1d-48fb4dfd9862\") " pod="kube-system/coredns-7d764666f9-b6b9t"
	Dec 28 06:51:29 pause-407564 kubelet[1291]: E1228 06:51:29.539793    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:29 pause-407564 kubelet[1291]: I1228 06:51:29.570823    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b6b9t" podStartSLOduration=13.570801909 podStartE2EDuration="13.570801909s" podCreationTimestamp="2025-12-28 06:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:51:29.556537988 +0000 UTC m=+19.209608579" watchObservedRunningTime="2025-12-28 06:51:29.570801909 +0000 UTC m=+19.223872503"
	Dec 28 06:51:30 pause-407564 kubelet[1291]: E1228 06:51:30.542103    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:30 pause-407564 kubelet[1291]: E1228 06:51:30.646851    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-407564" containerName="kube-scheduler"
	Dec 28 06:51:31 pause-407564 kubelet[1291]: E1228 06:51:31.543897    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.546489    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547202    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547277    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547294    1291 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.647597    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.776322    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: W1228 06:51:34.004252    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: W1228 06:51:34.376293    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483417    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483556    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483585    1291 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483604    1291 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548254    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548325    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548344    1291 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:39 pause-407564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:51:39 pause-407564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:51:39 pause-407564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:51:39 pause-407564 systemd[1]: kubelet.service: Consumed 1.307s CPU time.
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:51:42.552650  183358 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.613986  183358 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.685299  183358 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.753019  183358 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.816398  183358 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.898632  183358 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:42.970144  183358 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
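
Both halves of this dump show the same failure mode: the kubelet keeps retrying its CRI connection to /var/run/crio/crio.sock after the runtime has been stopped for the pause, and minikube's log collector fails because `sudo runc --root /run/runc list -f json` aborts when the runc state directory is absent. Below is a minimal Go sketch of a more tolerant container check, assuming the same command layout as the logs; the helper name and the treat-missing-as-empty policy are mine, not minikube's.

	package main

	import (
		"encoding/json"
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	// runcContainer holds the two fields of `runc list -f json` output we need.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listRuncContainers runs `sudo runc --root <root> list -f json`, treating a
	// missing state directory as "no containers" rather than a hard error.
	func listRuncContainers(root string) ([]runcContainer, error) {
		if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
			return nil, nil // runc never created state here; nothing is running
		}
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		cs, err := listRuncContainers("/run/runc")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%d running container(s)\n", len(cs))
	}

Under that policy, a missing /run/runc reads as "nothing running", which is what a stopped runtime actually means in this post-mortem.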
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-407564 -n pause-407564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-407564 -n pause-407564: exit status 2 (347.713853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-407564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-407564
helpers_test.go:244: (dbg) docker inspect pause-407564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92",
	        "Created": "2025-12-28T06:50:53.802995886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 164672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:50:53.872508733Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/hostname",
	        "HostsPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/hosts",
	        "LogPath": "/var/lib/docker/containers/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92/153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92-json.log",
	        "Name": "/pause-407564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-407564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-407564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "153cbe92cf33027aa2c65c31a2a3c766ed757b353b70178a3333c61eb21d6e92",
	                "LowerDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f85a186f328f2541518c5543cff9e2a03ffb99875bff737cb8acecc4cf5953e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-407564",
	                "Source": "/var/lib/docker/volumes/pause-407564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-407564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-407564",
	                "name.minikube.sigs.k8s.io": "pause-407564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7947dc054d214fa07d646acfbd43cf9d2e1e8489149f09d1efdf6b9f759152c0",
	            "SandboxKey": "/var/run/docker/netns/7947dc054d21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-407564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6dc37f03eb7f1110cc6d5d5f5b317ab3bfa04189869300294b6a9b54e0f5047a",
	                    "EndpointID": "51e3a9c31883848b7895718e2e4efe959e423c6735c74f87afc2c39a5c72630b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:a5:db:4a:f3:31",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-407564",
	                        "153cbe92cf33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
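
The binding table in this dump (22/tcp→32968, 2376/tcp→32969, 5000/tcp→32970, 8443/tcp→32971, 32443/tcp→32972) is how the harness reaches the node from the host. Here is a small Go sketch of pulling one binding out of the same `docker inspect` JSON, assuming only the docker CLI and the standard library; the helper name is mine.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the NetworkSettings.Ports part of `docker inspect`.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	// hostPortFor returns the first host port bound to the given container port.
	func hostPortFor(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 || len(entries[0].NetworkSettings.Ports[port]) == 0 {
			return "", fmt.Errorf("no binding for %s", port)
		}
		return entries[0].NetworkSettings.Ports[port][0].HostPort, nil
	}

	func main() {
		p, err := hostPortFor("pause-407564", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(p) // 32968 in the run captured above
	}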
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-407564 -n pause-407564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-407564 -n pause-407564: exit status 2 (328.175169ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-407564 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-407564 logs -n 25: (1.94056278s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │ 28 Dec 25 06:49 UTC │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │                     │
	│ stop    │ -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr                                                                                                                                                            │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:49 UTC │ 28 Dec 25 06:50 UTC │
	│ delete  │ -p scheduled-stop-847755                                                                                                                                                                                                  │ scheduled-stop-847755       │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:50 UTC │
	│ start   │ -p insufficient-storage-614853 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-614853 │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │                     │
	│ delete  │ -p insufficient-storage-614853                                                                                                                                                                                            │ insufficient-storage-614853 │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:50 UTC │
	│ start   │ -p force-systemd-env-421965 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-421965    │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p pause-407564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p offline-crio-376432 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-376432         │ jenkins │ v1.37.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p stopped-upgrade-416029 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-416029      │ jenkins │ v1.35.0 │ 28 Dec 25 06:50 UTC │ 28 Dec 25 06:51 UTC │
	│ delete  │ -p force-systemd-env-421965                                                                                                                                                                                               │ force-systemd-env-421965    │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p force-systemd-flag-095404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ stop    │ stopped-upgrade-416029 stop                                                                                                                                                                                               │ stopped-upgrade-416029      │ jenkins │ v1.35.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p stopped-upgrade-416029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-416029      │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ delete  │ -p offline-crio-376432                                                                                                                                                                                                    │ offline-crio-376432         │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-623987      │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ start   │ -p pause-407564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ ssh     │ force-systemd-flag-095404 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ delete  │ -p force-systemd-flag-095404                                                                                                                                                                                              │ force-systemd-flag-095404   │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │ 28 Dec 25 06:51 UTC │
	│ pause   │ -p pause-407564 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-407564                │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	│ start   │ -p cert-options-943497 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-943497         │ jenkins │ v1.37.0 │ 28 Dec 25 06:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:51:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
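
Every line that follows uses the klog header documented just above. For anyone post-processing these dumps, here is a small Go sketch of a parser for that header format; the regexp and names are my own, not part of minikube.

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogHeader matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogHeader = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1228 06:51:40.822819  182481 out.go:360] Setting OutFile to fd 1 ..."
		m := klogHeader.FindStringSubmatch(line)
		if m == nil {
			panic("line does not match klog header")
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s at=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}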
	I1228 06:51:40.822819  182481 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:51:40.823056  182481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:40.823060  182481 out.go:374] Setting ErrFile to fd 2...
	I1228 06:51:40.823063  182481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:51:40.823269  182481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:51:40.823752  182481 out.go:368] Setting JSON to false
	I1228 06:51:40.824820  182481 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2053,"bootTime":1766902648,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:51:40.824872  182481 start.go:143] virtualization: kvm guest
	I1228 06:51:40.829746  182481 out.go:179] * [cert-options-943497] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:51:40.831584  182481 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:51:40.831620  182481 notify.go:221] Checking for updates...
	I1228 06:51:40.834176  182481 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:51:40.835464  182481 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:51:40.837067  182481 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:51:40.838294  182481 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:51:40.839392  182481 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:51:40.841018  182481 config.go:182] Loaded profile config "cert-expiration-623987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:51:40.841171  182481 config.go:182] Loaded profile config "pause-407564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:51:40.841242  182481 config.go:182] Loaded profile config "stopped-upgrade-416029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1228 06:51:40.841309  182481 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:51:40.866213  182481 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:51:40.866280  182481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:51:40.925618  182481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:51:40.915452393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:51:40.925743  182481 docker.go:319] overlay module found
	I1228 06:51:40.930567  182481 out.go:179] * Using the docker driver based on user configuration
	I1228 06:51:40.931900  182481 start.go:309] selected driver: docker
	I1228 06:51:40.931908  182481 start.go:928] validating driver "docker" against <nil>
	I1228 06:51:40.931917  182481 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:51:40.932557  182481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:51:40.998540  182481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:51:40.987967705 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:51:40.998671  182481 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:51:40.998866  182481 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:51:41.000886  182481 out.go:179] * Using Docker driver with root privileges
	I1228 06:51:41.002237  182481 cni.go:84] Creating CNI manager for ""
	I1228 06:51:41.002297  182481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:51:41.002303  182481 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:51:41.002370  182481 start.go:353] cluster config:
	{Name:cert-options-943497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-options-943497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:51:41.008092  182481 out.go:179] * Starting "cert-options-943497" primary control-plane node in "cert-options-943497" cluster
	I1228 06:51:41.009487  182481 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:51:41.010679  182481 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:51:41.011804  182481 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:51:41.011834  182481 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:51:41.011856  182481 cache.go:65] Caching tarball of preloaded images
	I1228 06:51:41.011895  182481 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:51:41.011953  182481 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:51:41.011962  182481 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:51:41.012106  182481 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-options-943497/config.json ...
	I1228 06:51:41.012127  182481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-options-943497/config.json: {Name:mk589390f747a223db49ee198755d1bd874b64c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.038620  182481 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:51:41.038630  182481 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:51:41.038648  182481 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:51:41.038692  182481 start.go:360] acquireMachinesLock for cert-options-943497: {Name:mka040e8e15d71e208e33f95def2bee8478a80e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:51:41.038797  182481 start.go:364] duration metric: took 91.452µs to acquireMachinesLock for "cert-options-943497"
	I1228 06:51:41.038823  182481 start.go:93] Provisioning new machine with config: &{Name:cert-options-943497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-options-943497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8555 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:51:41.038897  182481 start.go:125] createHost starting for "" (driver="docker")
	I1228 06:51:40.030383  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1228 06:51:40.030436  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:51:40.739734  178310 cli_runner.go:164] Run: docker network inspect cert-expiration-623987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:51:40.756847  178310 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:51:40.761117  178310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:51:40.771493  178310 kubeadm.go:884] updating cluster {Name:cert-expiration-623987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-623987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:51:40.771608  178310 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:51:40.771656  178310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:51:40.808753  178310 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:51:40.808763  178310 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:51:40.808799  178310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:51:40.834848  178310 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:51:40.834858  178310 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:51:40.834869  178310 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:51:40.834937  178310 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-623987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-623987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:51:40.834988  178310 ssh_runner.go:195] Run: crio config
	I1228 06:51:40.885562  178310 cni.go:84] Creating CNI manager for ""
	I1228 06:51:40.885578  178310 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:51:40.885597  178310 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:51:40.885625  178310 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-623987 NodeName:cert-expiration-623987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:51:40.885769  178310 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-623987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
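
The rendered config above is multi-document YAML: InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration and KubeProxyConfiguration. Below is a quick Go sketch of sanity-checking such a file, for example that the kubelet's containerRuntimeEndpoint really is the CRI-O socket; it assumes gopkg.in/yaml.v3 and is my own checker, not part of the suite.

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// e.g. the file minikube uploads as /var/tmp/minikube/kubeadm.yaml.new
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("CRI endpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}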
	
	I1228 06:51:40.885833  178310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:51:40.895372  178310 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:51:40.895427  178310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:51:40.904801  178310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1228 06:51:40.918392  178310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:51:40.937423  178310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1228 06:51:40.959115  178310 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:51:40.964508  178310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:51:40.977069  178310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:51:41.075335  178310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:51:41.097093  178310 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987 for IP: 192.168.94.2
	I1228 06:51:41.097105  178310 certs.go:195] generating shared ca certs ...
	I1228 06:51:41.097122  178310 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.097290  178310 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:51:41.097327  178310 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:51:41.097333  178310 certs.go:257] generating profile certs ...
	I1228 06:51:41.097397  178310 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key
	I1228 06:51:41.097407  178310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt with IP's: []
	I1228 06:51:41.330002  178310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt ...
	I1228 06:51:41.330018  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt: {Name:mk25a7dff554107f25e32aa85e05fd22dd7d5125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.330201  178310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key ...
	I1228 06:51:41.330212  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key: {Name:mk244c1cb13dbc4371b4a9df015a1a3bade9dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.330324  178310 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key.dc960fc6
	I1228 06:51:41.330337  178310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt.dc960fc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1228 06:51:41.373612  178310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt.dc960fc6 ...
	I1228 06:51:41.373627  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt.dc960fc6: {Name:mkba38f81a49ccee322352fa2d46238dd25e6a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.373772  178310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key.dc960fc6 ...
	I1228 06:51:41.373779  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key.dc960fc6: {Name:mk34fa6cfd3b6d274aa536cdc0778f4470a4f741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.373848  178310 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt.dc960fc6 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt
	I1228 06:51:41.373921  178310 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key.dc960fc6 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key
	I1228 06:51:41.373973  178310 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.key
	I1228 06:51:41.373983  178310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.crt with IP's: []
	I1228 06:51:41.467947  178310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.crt ...
	I1228 06:51:41.467963  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.crt: {Name:mk277c823ee1375ebb99f8b764a4f28cf9183423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.468123  178310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.key ...
	I1228 06:51:41.468132  178310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.key: {Name:mk316f30bb66aab1f951e66a1c92e9d18a3138fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:51:41.468320  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:51:41.468352  178310 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:51:41.468361  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:51:41.468400  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:51:41.468424  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:51:41.468445  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:51:41.468484  178310 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:51:41.469145  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:51:41.488451  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:51:41.507126  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:51:41.536627  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:51:41.560239  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1228 06:51:41.578799  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:51:41.598065  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:51:41.617632  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:51:41.639983  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:51:41.661191  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:51:41.680214  178310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:51:41.698779  178310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:51:41.712788  178310 ssh_runner.go:195] Run: openssl version
	I1228 06:51:41.719141  178310 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:51:41.729689  178310 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:51:41.745126  178310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:51:41.749884  178310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:51:41.749942  178310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:51:41.796606  178310 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:51:41.805585  178310 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:51:41.814457  178310 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:51:41.822740  178310 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:51:41.831367  178310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:51:41.836234  178310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:51:41.836279  178310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:51:41.873977  178310 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:51:41.882957  178310 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:51:41.892622  178310 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:51:41.901739  178310 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:51:41.912416  178310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:51:41.918283  178310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:51:41.918360  178310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:51:41.961678  178310 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:51:41.971966  178310 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
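The three blocks above repeat the same routine per CA: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 to it so the library's lookup finds it. The hash is reproducible by hand; a sketch with the paths from this run (the symlink above shows 9076.pem hashed to 51391683):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem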
	I1228 06:51:41.979582  178310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:51:41.983626  178310 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:51:41.983675  178310 kubeadm.go:401] StartCluster: {Name:cert-expiration-623987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-623987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:51:41.983784  178310 ssh_runner.go:195] Run: sudo crio config
	I1228 06:51:42.047826  178310 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:51:42.059466  178310 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:42Z" level=error msg="open /run/runc: no such file or directory"
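This `open /run/runc: no such file or directory` error is the same one behind every MK_ADDON_*_PAUSED failure in this report: the paused-container check shells out to runc with `--root /run/runc`, but the CRI-O configuration dumped later in this log sets `default_runtime = "crun"`, whose state lives under /run/crun, so the runc root directory never exists. A hedged confirmation on the node, assuming both OCI runtimes are installed at the paths CRI-O reports:

	# fails as above: no runc state dir, because crun is the default runtime
	sudo runc --root /run/runc list -f json
	# lists the same containers from crun's state directory
	sudo crun --root /run/crun list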
	I1228 06:51:42.059553  178310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:51:42.068913  178310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:51:42.076872  178310 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:51:42.076924  178310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:51:42.085118  178310 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:51:42.085125  178310 kubeadm.go:158] found existing configuration files:
	
	I1228 06:51:42.085186  178310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:51:42.093211  178310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:51:42.093249  178310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:51:42.102325  178310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:51:42.113603  178310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:51:42.113655  178310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:51:42.123890  178310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:51:42.132441  178310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:51:42.132490  178310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:51:42.140782  178310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:51:42.150234  178310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:51:42.150282  178310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 06:51:42.158217  178310 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:51:42.201961  178310 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:51:42.202003  178310 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:51:42.274217  178310 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:51:42.274281  178310 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:51:42.274334  178310 kubeadm.go:319] OS: Linux
	I1228 06:51:42.274384  178310 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:51:42.274462  178310 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:51:42.274527  178310 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:51:42.274591  178310 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:51:42.274649  178310 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:51:42.274705  178310 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:51:42.274765  178310 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:51:42.274821  178310 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:51:42.346448  178310 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:51:42.346583  178310 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:51:42.346701  178310 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:51:42.355062  178310 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 06:51:42.357012  178310 out.go:252]   - Generating certificates and keys ...
	I1228 06:51:42.357143  178310 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:51:42.357255  178310 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:51:42.426357  178310 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:51:42.448989  178310 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:51:42.776343  178310 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:51:42.930054  178310 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:51:43.022644  178310 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:51:43.022791  178310 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-623987 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:51:43.254082  178310 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:51:43.254232  178310 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-623987 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:51:43.546378  178310 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:51:43.563371  178310 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:51:43.651118  178310 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:51:43.651222  178310 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:51:43.735085  178310 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:51:43.842000  178310 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:51:43.938294  178310 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:51:43.965957  178310 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:51:44.032210  178310 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:51:44.081628  178310 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:51:44.140211  178310 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28693154Z" level=info msg="RDT not available in the host system"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28694133Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.287907711Z" level=info msg="Conmon does support the --sync option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.287938144Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28795653Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.28886648Z" level=info msg="Conmon does support the --sync option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.288883258Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.29444553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.294472241Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295005525Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295423948Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.295477881Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.363338949Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-b6b9t Namespace:kube-system ID:6c515a2cb4c0a2dd806a680a1b572df17142f6ba7a40f2921d5540ce900754ed UID:8cc7b946-c84e-43bb-aa1d-48fb4dfd9862 NetNS:/var/run/netns/be1c5c5b-8286-496c-afcb-5dd3a51c237c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059a2a0}] Aliases:map[]}"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.363570357Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-b6b9t for CNI network kindnet (type=ptp)"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364421106Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364447895Z" level=info msg="Starting seccomp notifier watcher"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364504896Z" level=info msg="Create NRI interface"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364615432Z" level=info msg="built-in NRI default validator is disabled"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364625666Z" level=info msg="runtime interface created"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364641822Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.3646497Z" level=info msg="runtime interface starting up..."
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364657308Z" level=info msg="starting plugins..."
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364671214Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 28 06:51:36 pause-407564 crio[2205]: time="2025-12-28T06:51:36.364978534Z" level=info msg="No systemd watchdog enabled"
	Dec 28 06:51:36 pause-407564 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
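The configuration dump above carries the key detail behind the runc failures in this report: under [crio.runtime], `default_runtime = "crun"` with `runtime_root = "/run/crun"`, while the runc entry keeps `runtime_root = "/run/runc"`. The same fields can be pulled from a live node with the command minikube itself runs; a sketch, assuming crio is on PATH as in the logged commands:

	sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_root'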
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bd2e56774124a       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     16 seconds ago      Running             coredns                   0                   6c515a2cb4c0a       coredns-7d764666f9-b6b9t               kube-system
	f593198b473cd       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   27 seconds ago      Running             kindnet-cni               0                   d2c29c4ecdcac       kindnet-dmmg7                          kube-system
	4c2b6b9ecfcca       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     29 seconds ago      Running             kube-proxy                0                   6a8ceda656991       kube-proxy-jpqdf                       kube-system
	bbd4473aa7175       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     39 seconds ago      Running             etcd                      0                   9dfc0d8546269       etcd-pause-407564                      kube-system
	1f5dd65dc369c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     39 seconds ago      Running             kube-scheduler            0                   c483659cb15a6       kube-scheduler-pause-407564            kube-system
	d0301ed9f8af7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     39 seconds ago      Running             kube-controller-manager   0                   2c9e3a70fd2bf       kube-controller-manager-pause-407564   kube-system
	2408c603d45ac       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     39 seconds ago      Running             kube-apiserver            0                   f0f02c6030ed4       kube-apiserver-pause-407564            kube-system
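The table above is CRI-level state as minikube renders it. The same view can be reproduced directly against the runtime; a sketch, assuming crictl is present on the node and using the socket path from the kubelet config earlier in this log:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a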
	
	
	==> describe nodes <==
	Name:               pause-407564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-407564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=pause-407564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_51_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:51:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-407564
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:51:28 +0000   Sun, 28 Dec 2025 06:51:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-407564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                b271aa30-0f36-44e4-9d82-0d494fbd379c
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-b6b9t                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-407564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-dmmg7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-407564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-pause-407564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-jpqdf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-407564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node pause-407564 event: Registered Node pause-407564 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:51:45 up 34 min,  0 user,  load average: 3.58, 1.90, 1.21
	Linux pause-407564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:51:29 pause-407564 kubelet[1291]: I1228 06:51:29.003369    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds24\" (UniqueName: \"kubernetes.io/projected/8cc7b946-c84e-43bb-aa1d-48fb4dfd9862-kube-api-access-zds24\") pod \"coredns-7d764666f9-b6b9t\" (UID: \"8cc7b946-c84e-43bb-aa1d-48fb4dfd9862\") " pod="kube-system/coredns-7d764666f9-b6b9t"
	Dec 28 06:51:29 pause-407564 kubelet[1291]: E1228 06:51:29.539793    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:29 pause-407564 kubelet[1291]: I1228 06:51:29.570823    1291 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b6b9t" podStartSLOduration=13.570801909 podStartE2EDuration="13.570801909s" podCreationTimestamp="2025-12-28 06:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:51:29.556537988 +0000 UTC m=+19.209608579" watchObservedRunningTime="2025-12-28 06:51:29.570801909 +0000 UTC m=+19.223872503"
	Dec 28 06:51:30 pause-407564 kubelet[1291]: E1228 06:51:30.542103    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:30 pause-407564 kubelet[1291]: E1228 06:51:30.646851    1291 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-407564" containerName="kube-scheduler"
	Dec 28 06:51:31 pause-407564 kubelet[1291]: E1228 06:51:31.543897    1291 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6b9t" containerName="coredns"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.546489    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547202    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547277    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:33 pause-407564 kubelet[1291]: E1228 06:51:33.547294    1291 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.647597    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:33 pause-407564 kubelet[1291]: W1228 06:51:33.776322    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: W1228 06:51:34.004252    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: W1228 06:51:34.376293    1291 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483417    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483556    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483585    1291 kubelet_pods.go:1263] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.483604    1291 kubelet.go:2687] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548254    1291 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548325    1291 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:34 pause-407564 kubelet[1291]: E1228 06:51:34.548344    1291 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 28 06:51:39 pause-407564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:51:39 pause-407564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:51:39 pause-407564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:51:39 pause-407564 systemd[1]: kubelet.service: Consumed 1.307s CPU time.
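The dial failures above at 06:51:33-34 occur while CRI-O is restarting (the CRI-O section shows the service back up at 06:51:36), and kubelet is then stopped cleanly at 06:51:39, consistent with a pause in progress. A quick liveness check for the runtime side, a sketch assuming systemd-managed crio as in this image:

	systemctl is-active crio && ls -l /var/run/crio/crio.sock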
	

-- /stdout --
** stderr ** 
	E1228 06:51:44.983208  183954 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.044463  183954 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.107262  183954 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.169531  183954 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.230723  183954 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.291425  183954 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:51:45.356177  183954 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:51:45Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-407564 -n pause-407564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-407564 -n pause-407564: exit status 2 (374.547581ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-407564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.88s)
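The failing pause can be replayed by hand against the surviving profile to reproduce the runc check outside the test harness; a sketch, assuming the profile has not been deleted yet:

	out/minikube-linux-amd64 pause -p pause-407564 --alsologtostderr -v=1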

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.433115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-694122 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-694122 describe deploy/metrics-server -n kube-system: exit status 1 (61.495071ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-694122 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
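The assertion at start_stop_delete_test.go:219 compares the deployment's container image string against the --images/--registries overrides passed to `addons enable`. When the deployment does exist, the field it checks can be read directly; a sketch, assuming the kubectl context used by the test:

	kubectl --context old-k8s-version-694122 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'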
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-694122
helpers_test.go:244: (dbg) docker inspect old-k8s-version-694122:

-- stdout --
	[
	    {
	        "Id": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	        "Created": "2025-12-28T06:54:32.483449473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:54:32.522495403Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hosts",
	        "LogPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4-json.log",
	        "Name": "/old-k8s-version-694122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-694122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-694122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	                "LowerDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-694122",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-694122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-694122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b920edfe7b3155cb6951e4578a92579198ab24aec87ae51d4c7ee179dd83338",
	            "SandboxKey": "/var/run/docker/netns/6b920edfe7b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-694122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "910bcfa8529441ad2bfa62f448459947be2ed515eaa365c95b9fc10d53f59423",
	                    "EndpointID": "6ba7d229c6b4ce3e2983b1bf5467cb4912c276a710c768ba48cb3ca4ac12e644",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:5f:3a:b5:b1:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-694122",
	                        "0dd1cc4ae5d6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
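The test helpers read the node's SSH port out of this inspect output with a Go template; the equivalent standalone query (template copied from the cli_runner lines later in this log) is:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-694122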
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25: (1.087594904s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-610916 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-610916             │ jenkins │ v1.37.0 │ 28 Dec 25 06:52 UTC │                     │
	│ ssh     │ -p cilium-610916 sudo crio config                                                                                                                                                                                                             │ cilium-610916             │ jenkins │ v1.37.0 │ 28 Dec 25 06:52 UTC │                     │
	│ delete  │ -p cilium-610916                                                                                                                                                                                                                              │ cilium-610916             │ jenkins │ v1.37.0 │ 28 Dec 25 06:52 UTC │ 28 Dec 25 06:52 UTC │
	│ start   │ -p missing-upgrade-937201 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-937201    │ jenkins │ v1.35.0 │ 28 Dec 25 06:52 UTC │ 28 Dec 25 06:53 UTC │
	│ stop    │ -p NoKubernetes-606662                                                                                                                                                                                                                        │ NoKubernetes-606662       │ jenkins │ v1.37.0 │ 28 Dec 25 06:52 UTC │ 28 Dec 25 06:52 UTC │
	│ start   │ -p NoKubernetes-606662 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-606662       │ jenkins │ v1.37.0 │ 28 Dec 25 06:52 UTC │ 28 Dec 25 06:53 UTC │
	│ ssh     │ -p NoKubernetes-606662 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-606662       │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │                     │
	│ delete  │ -p NoKubernetes-606662                                                                                                                                                                                                                        │ NoKubernetes-606662       │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p missing-upgrade-937201 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-937201    │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ stop    │ -p kubernetes-upgrade-450365 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:54 UTC │
	│ delete  │ -p missing-upgrade-937201                                                                                                                                                                                                                     │ missing-upgrade-937201    │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p test-preload-785573 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-785573       │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ delete  │ -p kubernetes-upgrade-450365                                                                                                                                                                                                                  │ kubernetes-upgrade-450365 │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ image   │ test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-785573       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │                     │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460         │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122    │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:55:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:55:00.044983  233405 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:55:00.045120  233405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:55:00.045130  233405 out.go:374] Setting ErrFile to fd 2...
	I1228 06:55:00.045137  233405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:55:00.045463  233405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:55:00.046088  233405 out.go:368] Setting JSON to false
	I1228 06:55:00.047538  233405 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1766902648,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:55:00.047605  233405 start.go:143] virtualization: kvm guest
	I1228 06:55:00.050932  233405 out.go:179] * [no-preload-950460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:55:00.052335  233405 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:55:00.052365  233405 notify.go:221] Checking for updates...
	I1228 06:55:00.054865  233405 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:55:00.056109  233405 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:55:00.058122  233405 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:55:00.059351  233405 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:55:00.061159  233405 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:55:00.062877  233405 config.go:182] Loaded profile config "old-k8s-version-694122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1228 06:55:00.062987  233405 config.go:182] Loaded profile config "stopped-upgrade-416029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1228 06:55:00.063124  233405 config.go:182] Loaded profile config "test-preload-785573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:55:00.063223  233405 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:55:00.088810  233405 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:55:00.088939  233405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:55:00.148132  233405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:55:00.138180109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:55:00.148267  233405 docker.go:319] overlay module found
	I1228 06:55:00.150051  233405 out.go:179] * Using the docker driver based on user configuration
	I1228 06:55:00.151171  233405 start.go:309] selected driver: docker
	I1228 06:55:00.151186  233405 start.go:928] validating driver "docker" against <nil>
	I1228 06:55:00.151200  233405 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:55:00.151868  233405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:55:00.207640  233405 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:55:00.197097346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:55:00.207834  233405 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:55:00.208141  233405 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:55:00.210021  233405 out.go:179] * Using Docker driver with root privileges
	I1228 06:55:00.211134  233405 cni.go:84] Creating CNI manager for ""
	I1228 06:55:00.211204  233405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:55:00.211219  233405 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:55:00.211297  233405 start.go:353] cluster config:
	{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:55:00.212577  233405 out.go:179] * Starting "no-preload-950460" primary control-plane node in "no-preload-950460" cluster
	I1228 06:55:00.213584  233405 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:55:00.214623  233405 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:55:00.215881  233405 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:55:00.215934  233405 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:55:00.215994  233405 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:55:00.216045  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json: {Name:mk0dc9ddf9efed80009273d08c1364a933f98315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:00.216170  233405 cache.go:107] acquiring lock: {Name:mkd9176dc8bfe34090aff279f6f101ea6f0af9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216221  233405 cache.go:107] acquiring lock: {Name:mk7d35a6d2b389149dcbeab5c7c2ffb31f57d65c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216259  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 06:55:00.216238  233405 cache.go:107] acquiring lock: {Name:mke47ac9c7c044600bef8f6b93ef0e26dc8302f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216277  233405 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 133.882µs
	I1228 06:55:00.216291  233405 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 06:55:00.216296  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 06:55:00.216311  233405 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 107.904µs
	I1228 06:55:00.216320  233405 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 06:55:00.216290  233405 cache.go:107] acquiring lock: {Name:mke2c1949855d4a55e5668b0d2ae93b37c482c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216317  233405 cache.go:107] acquiring lock: {Name:mk532de4689e044277857a73866e5969a2e4fbc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216324  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 06:55:00.216336  233405 cache.go:107] acquiring lock: {Name:mk4a1a601fb4bce5015f4152fc8c90f967d969a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216355  233405 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 120.248µs
	I1228 06:55:00.216365  233405 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 06:55:00.216346  233405 cache.go:107] acquiring lock: {Name:mk242447cc3bf85a80c449b21152ddfbb942621c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216382  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 06:55:00.216389  233405 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 76.899µs
	I1228 06:55:00.216389  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 06:55:00.216401  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1228 06:55:00.216410  233405 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 166.218µs
	I1228 06:55:00.216411  233405 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 77.575µs
	I1228 06:55:00.216420  233405 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 06:55:00.216423  233405 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 06:55:00.216398  233405 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 06:55:00.216448  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 06:55:00.216458  233405 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 157.126µs
	I1228 06:55:00.216474  233405 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 06:55:00.216495  233405 cache.go:107] acquiring lock: {Name:mk9e59e568752d1ca479b7f88a0993095cc4ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.216699  233405 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 06:55:00.216719  233405 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 280.919µs
	I1228 06:55:00.216736  233405 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 06:55:00.216748  233405 cache.go:87] Successfully saved all images to host disk.
	I1228 06:55:00.238349  233405 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:55:00.238366  233405 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:55:00.238383  233405 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:55:00.238420  233405 start.go:360] acquireMachinesLock for no-preload-950460: {Name:mk62d7b73784bafca52412532a69147c30805a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:00.238538  233405 start.go:364] duration metric: took 87.21µs to acquireMachinesLock for "no-preload-950460"
	I1228 06:55:00.238565  233405 start.go:93] Provisioning new machine with config: &{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:55:00.238635  233405 start.go:125] createHost starting for "" (driver="docker")
	I1228 06:54:55.709220  231701 out.go:252] * Restarting existing docker container for "test-preload-785573" ...
	I1228 06:54:55.709282  231701 cli_runner.go:164] Run: docker start test-preload-785573
	I1228 06:54:55.966629  231701 cli_runner.go:164] Run: docker container inspect test-preload-785573 --format={{.State.Status}}
	I1228 06:54:55.988619  231701 kic.go:430] container "test-preload-785573" state is running.
	I1228 06:54:55.989176  231701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-785573
	I1228 06:54:56.011349  231701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/config.json ...
	I1228 06:54:56.011822  231701 machine.go:94] provisionDockerMachine start ...
	I1228 06:54:56.011967  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:54:56.033838  231701 main.go:144] libmachine: Using SSH client type: native
	I1228 06:54:56.034146  231701 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1228 06:54:56.034164  231701 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:54:56.034770  231701 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42380->127.0.0.1:33053: read: connection reset by peer
	I1228 06:54:59.163046  231701 main.go:144] libmachine: SSH cmd err, output: <nil>: test-preload-785573
	
	I1228 06:54:59.163077  231701 ubuntu.go:182] provisioning hostname "test-preload-785573"
	I1228 06:54:59.163138  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:54:59.181016  231701 main.go:144] libmachine: Using SSH client type: native
	I1228 06:54:59.181255  231701 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1228 06:54:59.181269  231701 main.go:144] libmachine: About to run SSH command:
	sudo hostname test-preload-785573 && echo "test-preload-785573" | sudo tee /etc/hostname
	I1228 06:54:59.312850  231701 main.go:144] libmachine: SSH cmd err, output: <nil>: test-preload-785573
	
	I1228 06:54:59.312954  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:54:59.331868  231701 main.go:144] libmachine: Using SSH client type: native
	I1228 06:54:59.332145  231701 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1228 06:54:59.332166  231701 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-785573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-785573/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-785573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:54:59.466399  231701 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:54:59.466427  231701 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:54:59.466479  231701 ubuntu.go:190] setting up certificates
	I1228 06:54:59.466491  231701 provision.go:84] configureAuth start
	I1228 06:54:59.466549  231701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-785573
	I1228 06:54:59.487455  231701 provision.go:143] copyHostCerts
	I1228 06:54:59.487528  231701 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:54:59.487550  231701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:54:59.487627  231701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:54:59.487773  231701 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:54:59.487786  231701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:54:59.487833  231701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:54:59.487929  231701 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:54:59.487941  231701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:54:59.487985  231701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:54:59.488089  231701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.test-preload-785573 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-785573]
	I1228 06:54:59.558007  231701 provision.go:177] copyRemoteCerts
	I1228 06:54:59.558098  231701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:54:59.558140  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:54:59.578240  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:54:59.672052  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:54:59.690213  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1228 06:54:59.708647  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:54:59.726296  231701 provision.go:87] duration metric: took 259.783449ms to configureAuth
	I1228 06:54:59.726329  231701 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:54:59.726492  231701 config.go:182] Loaded profile config "test-preload-785573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:54:59.726586  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:54:59.746533  231701 main.go:144] libmachine: Using SSH client type: native
	I1228 06:54:59.746854  231701 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1228 06:54:59.746884  231701 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:55:00.098940  231701 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:55:00.098974  231701 machine.go:97] duration metric: took 4.087084626s to provisionDockerMachine
	I1228 06:55:00.098989  231701 start.go:293] postStartSetup for "test-preload-785573" (driver="docker")
	I1228 06:55:00.099003  231701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:55:00.099088  231701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:55:00.099154  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:00.123537  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:55:00.216859  231701 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:55:00.220584  231701 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:55:00.220614  231701 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:55:00.220626  231701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:55:00.220677  231701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:55:00.220753  231701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:55:00.220848  231701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:55:00.228692  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:55:00.247220  231701 start.go:296] duration metric: took 148.219249ms for postStartSetup
	I1228 06:55:00.247278  231701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:55:00.247309  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:00.266705  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:55:00.358547  231701 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:55:00.363551  231701 fix.go:56] duration metric: took 4.673554872s for fixHost
	I1228 06:55:00.363577  231701 start.go:83] releasing machines lock for "test-preload-785573", held for 4.673602151s
	I1228 06:55:00.363683  231701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-785573
	I1228 06:55:00.383089  231701 ssh_runner.go:195] Run: cat /version.json
	I1228 06:55:00.383125  231701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:55:00.383149  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:00.383215  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:00.403485  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:55:00.404141  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:54:56.478436  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:56.978597  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:57.478169  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:57.978227  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:58.478013  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:58.977774  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:59.477817  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:54:59.978254  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:00.478113  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:00.978205  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:01.478235  226053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:01.604839  226053 kubeadm.go:1114] duration metric: took 13.206168238s to wait for elevateKubeSystemPrivileges
	I1228 06:55:01.604917  226053 kubeadm.go:403] duration metric: took 22.199859513s to StartCluster
	I1228 06:55:01.604948  226053 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:01.605075  226053 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:55:01.606163  226053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:01.606503  226053 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:55:01.606536  226053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:55:01.606748  226053 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:55:01.606829  226053 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-694122"
	I1228 06:55:01.606847  226053 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-694122"
	I1228 06:55:01.606874  226053 host.go:66] Checking if "old-k8s-version-694122" exists ...
	I1228 06:55:01.606886  226053 config.go:182] Loaded profile config "old-k8s-version-694122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1228 06:55:01.606897  226053 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-694122"
	I1228 06:55:01.607221  226053 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694122"
	I1228 06:55:01.607503  226053 cli_runner.go:164] Run: docker container inspect old-k8s-version-694122 --format={{.State.Status}}
	I1228 06:55:01.607797  226053 cli_runner.go:164] Run: docker container inspect old-k8s-version-694122 --format={{.State.Status}}
	I1228 06:55:01.608145  226053 out.go:179] * Verifying Kubernetes components...
	I1228 06:54:57.153271  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:54:57.153685  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:54:57.153775  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.206758  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.227591  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.227613  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:54:57.227659  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:54:57.231981  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.290446  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.315383  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.315413  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:54:57.315482  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:54:57.319641  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.373427  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.393619  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.393643  174872 logs.go:282] 0 containers: []
	W1228 06:54:57.393650  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:54:57.393685  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.452926  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.475375  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.475399  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:54:57.475457  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:54:57.479677  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:54:57.483306  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.540278  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.563273  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.563297  174872 logs.go:282] 0 containers: []
	W1228 06:54:57.563306  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:54:57.563345  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.614534  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.635341  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.635365  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:54:57.635411  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:54:57.639068  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.689144  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.710894  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.710920  174872 logs.go:282] 0 containers: []
	W1228 06:54:57.710926  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:54:57.710964  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:54:57.766989  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:54:57.789551  174872 cri.go:83] list returned 5 containers
	I1228 06:54:57.789580  174872 logs.go:282] 0 containers: []
	W1228 06:54:57.789590  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:54:57.789605  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:54:57.789618  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:54:57.856740  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:54:57.856767  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:54:57.856785  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:54:57.893670  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:54:57.893702  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:54:57.929870  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:54:57.929896  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:54:57.997119  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:54:57.997166  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:54:58.045951  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:54:58.045986  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:54:58.063205  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:54:58.063236  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:54:58.100930  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:54:58.100959  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:54:58.177490  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:54:58.177521  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:54:58.213292  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:54:58.213318  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
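
The loop above gathers logs per component: container IDs are discovered first, then the last 400 lines of each are pulled with crictl, with journalctl covering the runtime and the kubelet themselves. A minimal sketch of the same pattern (the component name and helper variable are illustrative, not minikube's code):

    CRICTL=$(which crictl || echo crictl)
    id=$(sudo "$CRICTL" ps -a --name kube-apiserver -q | head -n1)  # newest matching container
    [ -n "$id" ] && sudo "$CRICTL" logs --tail 400 "$id"
    sudo journalctl -u crio -n 400     # runtime logs
    sudo journalctl -u kubelet -n 400  # kubelet logs
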
	I1228 06:55:00.814124  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:00.814547  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:00.814644  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:00.884577  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:00.906336  174872 cri.go:83] list returned 5 containers
	I1228 06:55:00.906361  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:00.906402  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:00.910692  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:00.974272  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:00.999326  174872 cri.go:83] list returned 5 containers
	I1228 06:55:00.999357  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:00.999415  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:01.004227  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.083324  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.110886  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.110913  174872 logs.go:282] 0 containers: []
	W1228 06:55:01.110925  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:01.110962  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.172218  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.194914  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.194946  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:01.195003  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:01.199963  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:01.204285  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.270387  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.295484  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.295518  174872 logs.go:282] 0 containers: []
	W1228 06:55:01.295529  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:01.295573  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.373153  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.396629  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.396693  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:01.396755  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:01.400621  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.471769  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.501050  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.501081  174872 logs.go:282] 0 containers: []
	W1228 06:55:01.501094  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:01.501199  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:01.590991  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:01.609197  226053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:01.645339  226053 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:00.554804  231701 ssh_runner.go:195] Run: systemctl --version
	I1228 06:55:00.562814  231701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:55:00.600464  231701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:55:00.605364  231701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:55:00.605429  231701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:55:00.614154  231701 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
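
The find invocation above is logged with its shell quoting already stripped; an equivalent, runnable form of the same "park any bridge/podman CNI configs" step would look roughly like this (a sketch, not minikube's exact command string):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
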
	I1228 06:55:00.614181  231701 start.go:496] detecting cgroup driver to use...
	I1228 06:55:00.614210  231701 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:55:00.614253  231701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:55:00.629086  231701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:55:00.643371  231701 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:55:00.643435  231701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:55:00.675175  231701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:55:00.693330  231701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:55:00.798372  231701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:55:00.901598  231701 docker.go:234] disabling docker service ...
	I1228 06:55:00.901670  231701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:55:00.921696  231701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:55:00.937802  231701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:55:01.053710  231701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:55:01.156574  231701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
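
cri-docker and docker are taken out of the way with the same stop, disable, mask sequence, so neither can be socket-activated back while CRI-O owns the node; the final is-active probe confirms the result. The generic shape of the sequence (shown for docker; cri-docker gets the same treatment above):

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"
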
	I1228 06:55:01.173121  231701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:55:01.190325  231701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:55:01.190386  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.201052  231701 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:55:01.201122  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.213009  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.225336  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.236980  231701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:55:01.247257  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.258134  231701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.269725  231701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:01.280515  231701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:55:01.289880  231701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
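
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Once it completes, the touched keys should read roughly as follows (values inferred from the sed expressions, not dumped from this run):

    sudo grep -A2 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]
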
	I1228 06:55:01.299343  231701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:01.405694  231701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:55:01.592728  231701 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:55:01.592793  231701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:55:01.600503  231701 start.go:574] Will wait 60s for crictl version
	I1228 06:55:01.600690  231701 ssh_runner.go:195] Run: which crictl
	I1228 06:55:01.606788  231701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:55:01.694809  231701 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:55:01.694983  231701 ssh_runner.go:195] Run: crio --version
	I1228 06:55:01.756122  231701 ssh_runner.go:195] Run: crio --version
	I1228 06:55:01.812549  231701 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:55:01.647385  226053 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:55:01.647424  226053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:55:01.647489  226053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694122
	I1228 06:55:01.657699  226053 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-694122"
	I1228 06:55:01.657751  226053 host.go:66] Checking if "old-k8s-version-694122" exists ...
	I1228 06:55:01.658447  226053 cli_runner.go:164] Run: docker container inspect old-k8s-version-694122 --format={{.State.Status}}
	I1228 06:55:01.695233  226053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/old-k8s-version-694122/id_rsa Username:docker}
	I1228 06:55:01.707348  226053 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:55:01.707376  226053 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:55:01.707449  226053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694122
	I1228 06:55:01.751538  226053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/old-k8s-version-694122/id_rsa Username:docker}
	I1228 06:55:01.822198  226053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:55:01.863167  226053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:55:01.866523  226053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:55:01.898832  226053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:55:02.213011  226053 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1228 06:55:02.214022  226053 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694122" to be "Ready" ...
	I1228 06:55:02.545600  226053 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:55:01.813938  231701 cli_runner.go:164] Run: docker network inspect test-preload-785573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:55:01.841447  231701 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 06:55:01.847093  231701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
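
The grep/echo/cp pipeline above is an idempotent upsert: strip any existing line for the hostname, append a fresh "IP<tab>name" entry, and copy the result back over /etc/hosts (the same trick is reused for control-plane.minikube.internal further down). As a generic sketch, with a hypothetical helper name:

    upsert_host() {  # usage: upsert_host IP NAME
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    upsert_host 192.168.76.1 host.minikube.internal
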
	I1228 06:55:01.869290  231701 kubeadm.go:884] updating cluster {Name:test-preload-785573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-785573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:55:01.869446  231701 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:55:01.869496  231701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:55:01.924893  231701 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:55:01.924922  231701 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:55:01.924981  231701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:55:01.979692  231701 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:55:01.979724  231701 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:55:01.979734  231701 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1228 06:55:01.979874  231701 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-785573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:test-preload-785573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:55:01.979959  231701 ssh_runner.go:195] Run: crio config
	I1228 06:55:02.071762  231701 cni.go:84] Creating CNI manager for ""
	I1228 06:55:02.071803  231701 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:55:02.071822  231701 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:55:02.071852  231701 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-785573 NodeName:test-preload-785573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:55:02.072014  231701 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-785573"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:55:02.072177  231701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:55:02.085491  231701 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:55:02.085561  231701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:55:02.097836  231701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1228 06:55:02.117476  231701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:55:02.138442  231701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
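
The kubeadm config rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new; the restart path below diffs it against the existing /var/tmp/minikube/kubeadm.yaml before deciding whether the control plane needs reconfiguring. To sanity-check such a file by hand, recent kubeadm releases (v1.26+) can validate it directly:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
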
	I1228 06:55:02.162221  231701 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:55:02.170622  231701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:55:02.187134  231701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:02.334414  231701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:55:02.376230  231701 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573 for IP: 192.168.76.2
	I1228 06:55:02.376254  231701 certs.go:195] generating shared ca certs ...
	I1228 06:55:02.376273  231701 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:02.376431  231701 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:55:02.376485  231701 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:55:02.376494  231701 certs.go:257] generating profile certs ...
	I1228 06:55:02.376595  231701 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.key
	I1228 06:55:02.376677  231701 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/apiserver.key.ce78f10a
	I1228 06:55:02.376728  231701 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/proxy-client.key
	I1228 06:55:02.376876  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:55:02.376927  231701 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:55:02.376939  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:55:02.376975  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:55:02.377005  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:55:02.377088  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:55:02.377153  231701 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:55:02.377915  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:55:02.417555  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:55:02.446983  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:55:02.476327  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:55:02.511693  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1228 06:55:02.549668  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:55:02.574547  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:55:02.597859  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:55:02.621982  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:55:02.645910  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:55:02.669113  231701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:55:02.692737  231701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:55:02.713641  231701 ssh_runner.go:195] Run: openssl version
	I1228 06:55:02.722225  231701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:55:02.733688  231701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:55:02.744747  231701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:55:02.750090  231701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:55:02.750148  231701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:55:02.808390  231701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:55:02.819550  231701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:55:02.829848  231701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:55:02.840408  231701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:55:02.846683  231701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:55:02.846742  231701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:55:02.905600  231701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:55:02.917816  231701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:02.928129  231701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:55:02.937697  231701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:02.943226  231701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:02.943295  231701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:02.998487  231701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
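
The openssl/ln/test triplets above maintain OpenSSL's hashed-directory layout: each CA under /etc/ssl/certs is reachable through a symlink named for its subject hash (b5213941.0 for minikubeCA in this run). Spelled out for one certificate:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"
    sudo test -L "/etc/ssl/certs/$h.0" && echo ok
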
	I1228 06:55:03.011296  231701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:55:03.016076  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:55:03.077903  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:55:03.138358  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:55:03.190951  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:55:03.241739  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:55:03.300619  231701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
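
Each -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit code is what decides whether a cert gets regenerated. Standalone:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for >24h" || echo "expiring within 24h"
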
	I1228 06:55:03.360652  231701 kubeadm.go:401] StartCluster: {Name:test-preload-785573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-785573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:55:03.360796  231701 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:03.426942  231701 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:55:03.450948  231701 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:03Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:55:03.451105  231701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:55:03.469125  231701 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:55:03.469144  231701 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:55:03.469239  231701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:55:03.481223  231701 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:55:03.482054  231701 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-785573" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:55:03.482489  231701 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-785573" cluster setting kubeconfig missing "test-preload-785573" context setting]
	I1228 06:55:03.483321  231701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:03.484259  231701 kapi.go:59] client config for test-preload-785573: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.key", CAFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:55:03.484801  231701 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1228 06:55:03.484816  231701 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1228 06:55:03.484824  231701 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1228 06:55:03.484830  231701 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1228 06:55:03.484837  231701 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1228 06:55:03.484843  231701 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1228 06:55:03.485944  231701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:55:03.497765  231701 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 06:55:03.497800  231701 kubeadm.go:602] duration metric: took 28.648786ms to restartPrimaryControlPlane
	I1228 06:55:03.497810  231701 kubeadm.go:403] duration metric: took 137.168288ms to StartCluster
	I1228 06:55:03.497827  231701 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:03.497900  231701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:55:03.498898  231701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:03.499199  231701 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:55:03.499440  231701 config.go:182] Loaded profile config "test-preload-785573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:55:03.499506  231701 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:55:03.499593  231701 addons.go:70] Setting storage-provisioner=true in profile "test-preload-785573"
	I1228 06:55:03.499607  231701 addons.go:239] Setting addon storage-provisioner=true in "test-preload-785573"
	W1228 06:55:03.499615  231701 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:55:03.499641  231701 host.go:66] Checking if "test-preload-785573" exists ...
	I1228 06:55:03.500153  231701 cli_runner.go:164] Run: docker container inspect test-preload-785573 --format={{.State.Status}}
	I1228 06:55:03.500295  231701 addons.go:70] Setting default-storageclass=true in profile "test-preload-785573"
	I1228 06:55:03.500331  231701 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-785573"
	I1228 06:55:03.500625  231701 cli_runner.go:164] Run: docker container inspect test-preload-785573 --format={{.State.Status}}
	I1228 06:55:03.501180  231701 out.go:179] * Verifying Kubernetes components...
	I1228 06:55:03.502365  231701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:03.527782  231701 kapi.go:59] client config for test-preload-785573: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.key", CAFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:55:03.528194  231701 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:00.241322  233405 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:55:00.241616  233405 start.go:159] libmachine.API.Create for "no-preload-950460" (driver="docker")
	I1228 06:55:00.241646  233405 client.go:173] LocalClient.Create starting
	I1228 06:55:00.241690  233405 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:55:00.241729  233405 main.go:144] libmachine: Decoding PEM data...
	I1228 06:55:00.241755  233405 main.go:144] libmachine: Parsing certificate...
	I1228 06:55:00.241852  233405 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:55:00.241891  233405 main.go:144] libmachine: Decoding PEM data...
	I1228 06:55:00.241904  233405 main.go:144] libmachine: Parsing certificate...
	I1228 06:55:00.242251  233405 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:55:00.260578  233405 cli_runner.go:211] docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:55:00.260652  233405 network_create.go:284] running [docker network inspect no-preload-950460] to gather additional debugging logs...
	I1228 06:55:00.260672  233405 cli_runner.go:164] Run: docker network inspect no-preload-950460
	W1228 06:55:00.277511  233405 cli_runner.go:211] docker network inspect no-preload-950460 returned with exit code 1
	I1228 06:55:00.277538  233405 network_create.go:287] error running [docker network inspect no-preload-950460]: docker network inspect no-preload-950460: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-950460 not found
	I1228 06:55:00.277560  233405 network_create.go:289] output of [docker network inspect no-preload-950460]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-950460 not found
	
	** /stderr **
	I1228 06:55:00.277671  233405 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:55:00.295416  233405 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:55:00.296072  233405 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:55:00.296705  233405 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:55:00.297271  233405 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bb0b674815c9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:bc:f1:54:86:b0} reservation:<nil>}
	I1228 06:55:00.297753  233405 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-910bcfa85294 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:73:05:60:cf:7d} reservation:<nil>}
	I1228 06:55:00.298569  233405 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00207cd50}
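
network.go walks candidate private /24 subnets in steps of 9 in the third octet (49, 58, 67, 76, 85, ...) and takes the first one no existing Docker bridge occupies, 192.168.94.0/24 here. The scan, sketched in shell (step size and bounds inferred from this log, so treat them as assumptions):

    taken="192.168.49.0 192.168.58.0 192.168.67.0 192.168.76.0 192.168.85.0"
    for o in $(seq 49 9 247); do
      case " $taken " in *" 192.168.$o.0 "*) continue ;; esac
      echo "using free private subnet 192.168.$o.0/24"
      break
    done
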
	I1228 06:55:00.298596  233405 network_create.go:124] attempt to create docker network no-preload-950460 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1228 06:55:00.298650  233405 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-950460 no-preload-950460
	I1228 06:55:00.352521  233405 network_create.go:108] docker network no-preload-950460 192.168.94.0/24 created
	I1228 06:55:00.352565  233405 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-950460" container
	I1228 06:55:00.352657  233405 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:55:00.371580  233405 cli_runner.go:164] Run: docker volume create no-preload-950460 --label name.minikube.sigs.k8s.io=no-preload-950460 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:55:00.391308  233405 oci.go:103] Successfully created a docker volume no-preload-950460
	I1228 06:55:00.391414  233405 cli_runner.go:164] Run: docker run --rm --name no-preload-950460-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-950460 --entrypoint /usr/bin/test -v no-preload-950460:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:55:00.798254  233405 oci.go:107] Successfully prepared a docker volume no-preload-950460
	I1228 06:55:00.798336  233405 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1228 06:55:00.798426  233405 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:55:00.798466  233405 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:55:00.798511  233405 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:55:00.875750  233405 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-950460 --name no-preload-950460 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-950460 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-950460 --network no-preload-950460 --ip 192.168.94.2 --volume no-preload-950460:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:55:01.203942  233405 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Running}}
	I1228 06:55:01.227986  233405 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:55:01.250667  233405 cli_runner.go:164] Run: docker exec no-preload-950460 stat /var/lib/dpkg/alternatives/iptables
	I1228 06:55:01.303356  233405 oci.go:144] the created container "no-preload-950460" has a running status.
	I1228 06:55:01.303385  233405 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa...
	I1228 06:55:01.532931  233405 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:55:01.572198  233405 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:55:01.606707  233405 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:55:01.606753  233405 kic_runner.go:114] Args: [docker exec --privileged no-preload-950460 chown docker:docker /home/docker/.ssh/authorized_keys]
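
Creating the ssh key for kic and pushing id_rsa.pub into /home/docker/.ssh/authorized_keys boils down to generating an RSA keypair and serializing both halves. A rough sketch of those two serializations, assuming the golang.org/x/crypto/ssh package and local output files id_rsa / id_rsa.pub:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate the keypair that backs machines/<name>/id_rsa.
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}

		// PEM-encode the private half for the id_rsa file.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
			log.Fatal(err)
		}

		// Serialize the public half in authorized_keys format, the form
		// that gets copied into the container in the log above.
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
			log.Fatal(err)
		}
	}
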
	I1228 06:55:01.724950  233405 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:55:01.761266  233405 machine.go:94] provisionDockerMachine start ...
	I1228 06:55:01.761365  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:01.790172  233405 main.go:144] libmachine: Using SSH client type: native
	I1228 06:55:01.790495  233405 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1228 06:55:01.790524  233405 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:55:01.955790  233405 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:55:01.955818  233405 ubuntu.go:182] provisioning hostname "no-preload-950460"
	I1228 06:55:01.955952  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:01.986507  233405 main.go:144] libmachine: Using SSH client type: native
	I1228 06:55:01.986901  233405 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1228 06:55:01.986960  233405 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-950460 && echo "no-preload-950460" | sudo tee /etc/hostname
	I1228 06:55:02.166117  233405 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:55:02.166192  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:02.197020  233405 main.go:144] libmachine: Using SSH client type: native
	I1228 06:55:02.197405  233405 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1228 06:55:02.197437  233405 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:55:02.363521  233405 main.go:144] libmachine: SSH cmd err, output: <nil>: 
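
The SSH command above is the usual "ensure the hostname resolves" dance: leave /etc/hosts alone if the name is already mapped, otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same logic as a small pure function over the file contents (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname mirrors the shell above: if no line already maps
	// the hostname, rewrite a 127.0.1.1 entry or append one.
	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				return hosts // already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostname("127.0.0.1 localhost", "no-preload-950460"))
	}
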
	I1228 06:55:02.363551  233405 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:55:02.363687  233405 ubuntu.go:190] setting up certificates
	I1228 06:55:02.363703  233405 provision.go:84] configureAuth start
	I1228 06:55:02.363803  233405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:55:02.394129  233405 provision.go:143] copyHostCerts
	I1228 06:55:02.394271  233405 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:55:02.394335  233405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:55:02.394467  233405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:55:02.394667  233405 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:55:02.394722  233405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:55:02.394805  233405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:55:02.394989  233405 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:55:02.395063  233405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:55:02.395168  233405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:55:02.395351  233405 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.no-preload-950460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-950460]
	I1228 06:55:02.475074  233405 provision.go:177] copyRemoteCerts
	I1228 06:55:02.475199  233405 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:55:02.475271  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:02.509429  233405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:55:02.617324  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:55:02.641920  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:55:02.664865  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:55:02.689305  233405 provision.go:87] duration metric: took 325.575973ms to configureAuth
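
Generating server.pem with org=jenkins.no-preload-950460 and the SAN list [127.0.0.1 192.168.94.2 localhost minikube no-preload-950460] is standard crypto/x509 work: a CA key signs a server template whose DNSNames and IPAddresses carry the SANs. A self-contained sketch with a throwaway CA standing in for ca.pem / ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for ca.pem / ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}

		// Server template carrying the SANs from the log line.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-950460"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "no-preload-950460"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", out, 0644); err != nil {
			log.Fatal(err)
		}
	}
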
	I1228 06:55:02.689357  233405 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:55:02.689576  233405 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:55:02.689719  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:02.715355  233405 main.go:144] libmachine: Using SSH client type: native
	I1228 06:55:02.715665  233405 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1228 06:55:02.715684  233405 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:55:03.066667  233405 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:55:03.066693  233405 machine.go:97] duration metric: took 1.305399665s to provisionDockerMachine
	I1228 06:55:03.066705  233405 client.go:176] duration metric: took 2.825053292s to LocalClient.Create
	I1228 06:55:03.066721  233405 start.go:167] duration metric: took 2.82510557s to libmachine.API.Create "no-preload-950460"
	I1228 06:55:03.066729  233405 start.go:293] postStartSetup for "no-preload-950460" (driver="docker")
	I1228 06:55:03.066742  233405 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:55:03.068295  233405 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:55:03.068379  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:03.095881  233405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:55:03.208417  233405 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:55:03.212876  233405 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:55:03.212905  233405 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:55:03.212917  233405 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:55:03.212974  233405 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:55:03.213130  233405 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:55:03.213249  233405 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:55:03.223401  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:55:03.248892  233405 start.go:296] duration metric: took 182.15ms for postStartSetup
	I1228 06:55:03.249380  233405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:55:03.275284  233405 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:55:03.275641  233405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:55:03.275805  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:03.301135  233405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:55:03.407613  233405 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:55:03.413453  233405 start.go:128] duration metric: took 3.174804576s to createHost
	I1228 06:55:03.413482  233405 start.go:83] releasing machines lock for "no-preload-950460", held for 3.174929742s
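
The two df probes above grab the usage percentage and free gigabytes for /var before the machine lock is released. The awk 'NR==2{print $4}' part translates to "second line, fourth field"; a small Go equivalent, assuming Linux df with -BG support:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// availGB runs `df -BG` on a path and returns the "Available" column
	// of the data row, mirroring the awk 'NR==2{print $4}' pipeline.
	func availGB(path string) (string, error) {
		out, err := exec.Command("df", "-BG", path).Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		fields := strings.Fields(lines[1])
		if len(fields) < 4 {
			return "", fmt.Errorf("unexpected df row: %q", lines[1])
		}
		return fields[3], nil
	}

	func main() {
		avail, err := availGB("/var")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("available on /var:", avail)
	}
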
	I1228 06:55:03.413547  233405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:55:03.439689  233405 ssh_runner.go:195] Run: cat /version.json
	I1228 06:55:03.439749  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:03.439750  233405 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:55:03.439809  233405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:55:03.467874  233405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:55:03.470912  233405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:55:03.666110  233405 ssh_runner.go:195] Run: systemctl --version
	I1228 06:55:03.673754  233405 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:55:03.725502  233405 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:55:03.731173  233405 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:55:03.731261  233405 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:55:03.763579  233405 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
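
Disabling the conflicting bridge CNI configs is a rename, not a delete: every *bridge* or *podman* file in /etc/cni/net.d gains a .mk_disabled suffix so cri-o ignores it but it can be restored later. A sketch of the same effect using filepath.Glob instead of find:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// disableBridgeCNI renames bridge/podman CNI configs out of the way
	// by appending .mk_disabled, like the find/mv pipeline above.
	func disableBridgeCNI(dir string) ([]string, error) {
		var disabled []string
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("disabled:", files)
	}
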
	I1228 06:55:03.763661  233405 start.go:496] detecting cgroup driver to use...
	I1228 06:55:03.763710  233405 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:55:03.763774  233405 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:55:03.782545  233405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:55:03.795611  233405 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:55:03.795670  233405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:55:03.813918  233405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:55:03.835600  233405 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:55:03.929299  233405 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:55:04.035787  233405 docker.go:234] disabling docker service ...
	I1228 06:55:04.035851  233405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:55:04.062781  233405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:55:04.076920  233405 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:55:04.170021  233405 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:55:04.261175  233405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:55:04.273693  233405 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:55:04.287588  233405 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:55:04.287650  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.297836  233405 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:55:04.297915  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.306664  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.317659  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.331841  233405 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:55:04.340418  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.351080  233405 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.364147  233405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:55:04.372471  233405 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:55:04.379391  233405 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:55:04.386943  233405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:04.505065  233405 ssh_runner.go:195] Run: sudo systemctl restart crio
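
The series of sed invocations above pins the pause image and switches cri-o to the systemd cgroup manager by rewriting whole lines of /etc/crio/crio.conf.d/02-crio.conf. The same whole-line rewrites expressed with Go regexps over an in-memory config (the file contents here are illustrative):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		// Pin the pause image, as in the first sed above.
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		// Force the systemd cgroup manager, as in the second sed.
		cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgm.ReplaceAllString(conf, `cgroup_manager = "systemd"`)

		fmt.Print(conf)
	}
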
	I1228 06:55:04.905797  233405 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:55:04.905876  233405 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:55:04.909904  233405 start.go:574] Will wait 60s for crictl version
	I1228 06:55:04.909958  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:04.913628  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:55:04.939048  233405 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:55:04.939138  233405 ssh_runner.go:195] Run: crio --version
	I1228 06:55:04.972565  233405 ssh_runner.go:195] Run: crio --version
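
"Will wait 60s for socket path /var/run/crio/crio.sock" is a poll loop: keep dialing the unix socket until it accepts or the deadline passes. A minimal sketch of that wait:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSocket polls a unix socket until it accepts a connection or
	// the deadline passes, like the 60s wait for crio.sock above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("unix", path, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is up")
	}
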
	I1228 06:55:05.003655  233405 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:55:05.004791  233405 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:55:05.023322  233405 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:55:05.027601  233405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:55:05.037946  233405 kubeadm.go:884] updating cluster {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...

	I1228 06:55:05.038080  233405 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:55:05.038123  233405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:55:03.528273  231701 addons.go:239] Setting addon default-storageclass=true in "test-preload-785573"
	W1228 06:55:03.528292  231701 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:55:03.528321  231701 host.go:66] Checking if "test-preload-785573" exists ...
	I1228 06:55:03.528799  231701 cli_runner.go:164] Run: docker container inspect test-preload-785573 --format={{.State.Status}}
	I1228 06:55:03.530209  231701 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:55:03.530228  231701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:55:03.530277  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:03.561475  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:55:03.562427  231701 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:55:03.562452  231701 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:55:03.562507  231701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-785573
	I1228 06:55:03.590135  231701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/test-preload-785573/id_rsa Username:docker}
	I1228 06:55:03.684285  231701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:55:03.697379  231701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:55:03.706927  231701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:55:05.082255  231701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.397939436s)
	I1228 06:55:05.082260  231701 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.384849478s)
	I1228 06:55:05.082295  231701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.375336549s)
	I1228 06:55:05.082325  231701 node_ready.go:35] waiting up to 6m0s for node "test-preload-785573" to be "Ready" ...
	I1228 06:55:05.091000  231701 node_ready.go:49] node "test-preload-785573" is "Ready"
	I1228 06:55:05.091045  231701 node_ready.go:38] duration metric: took 8.686487ms for node "test-preload-785573" to be "Ready" ...
	I1228 06:55:05.091061  231701 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:55:05.091114  231701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:55:05.095273  231701 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:55:05.096494  231701 addons.go:530] duration metric: took 1.596993084s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:55:05.105882  231701 api_server.go:72] duration metric: took 1.606646547s to wait for apiserver process to appear ...
	I1228 06:55:05.105916  231701 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:55:05.105934  231701 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:55:05.110372  231701 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:55:05.110402  231701 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
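
The healthz probe above is expected to fail for a while: the 500 body shows the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending, and minikube simply retries until the endpoint returns 200. A sketch of such a retry loop; InsecureSkipVerify stands in for the real client's pinned cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz hits /healthz until it returns 200, tolerating the
	// 500s seen while post-start hooks finish.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is not in the system pool here; a
				// production client would trust the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("healthz never returned 200")
	}

	func main() {
		fmt.Println(pollHealthz("https://192.168.76.2:8443/healthz", time.Minute))
	}
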
	I1228 06:55:02.547226  226053 addons.go:530] duration metric: took 940.480707ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:55:02.718219  226053 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694122" context rescaled to 1 replicas
	W1228 06:55:04.217895  226053 node_ready.go:57] node "old-k8s-version-694122" has "Ready":"False" status (will retry)
	I1228 06:55:01.630144  174872 cri.go:83] list returned 5 containers
	I1228 06:55:01.630176  174872 logs.go:282] 0 containers: []
	W1228 06:55:01.630186  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:01.630273  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:01.630363  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:01.717307  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:01.717353  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:01.794852  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:01.794882  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:55:01.867256  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:01.867313  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:01.951716  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:01.951753  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:02.025991  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:02.026043  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:02.131825  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:02.131849  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:02.131863  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:02.279293  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:02.279395  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:02.345542  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:02.345572  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:02.524939  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:02.524992  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
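
Each "Gathering logs for ..." step above shells out to crictl logs --tail 400 with a container id (or to journalctl for kubelet and cri-o). A thin wrapper sketch; "CONTAINER_ID" is a placeholder, not an id from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs shells out the way logs.go does above, capping
	// output at the last 400 lines of the container's log.
	func tailContainerLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl",
			"logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := tailContainerLogs("CONTAINER_ID")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(logs)
	}
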
	I1228 06:55:05.049118  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:05.049495  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:05.049587  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.109831  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.129294  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.129317  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:05.129360  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.133079  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.183250  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.202193  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.202216  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:05.202259  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.205887  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.280296  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.308922  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.308958  174872 logs.go:282] 0 containers: []
	W1228 06:55:05.308968  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:05.309016  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.392819  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.422996  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.423051  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:05.423105  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.428983  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.432908  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.503089  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.528201  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.528233  174872 logs.go:282] 0 containers: []
	W1228 06:55:05.528244  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:05.528299  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.606010  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.642741  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.642773  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:05.642826  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.648679  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.729361  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.759387  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.759422  174872 logs.go:282] 0 containers: []
	W1228 06:55:05.759436  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:05.759486  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:05.826989  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:05.848386  174872 cri.go:83] list returned 5 containers
	I1228 06:55:05.848412  174872 logs.go:282] 0 containers: []
	W1228 06:55:05.848418  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:05.848428  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:05.848441  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:55:05.886565  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:05.886591  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:05.969389  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:05.969430  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:06.141626  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:06.141662  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:06.177909  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:06.177947  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:06.260243  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:06.260283  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:06.297384  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:06.297409  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:06.334804  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:06.334831  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:55:06.350044  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:06.350072  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:06.410817  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:06.410835  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:06.410851  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:05.067192  233405 crio.go:627] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1228 06:55:05.067218  233405 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1228 06:55:05.067315  233405 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.067291  233405 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.067374  233405 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.067351  233405 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.067387  233405 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1228 06:55:05.067377  233405 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.067332  233405 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.067286  233405 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:05.068855  233405 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.068894  233405 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.068931  233405 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.068992  233405 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.069012  233405 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1228 06:55:05.069014  233405 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:05.068858  233405 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.069107  233405 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
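
Every "daemon lookup ... No such image" line above just means the local Docker daemon cannot serve the image, so minikube falls back to its on-disk cache; the podman image inspect calls that follow then decide per image whether the cached tarball must be shipped. A sketch of that decision, treating any inspect failure as "needs transfer" (assumes sudo and podman on the target):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer compares the ID podman knows for a tag against the
	// expected digest, like the "does not exist at hash ..." checks.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the runtime at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		fmt.Println("needs transfer:", needsTransfer(
			"registry.k8s.io/pause:3.10.1",
			"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"))
	}
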
	I1228 06:55:05.203846  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.208562  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.213719  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.215923  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.219023  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1228 06:55:05.223606  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.230878  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.250965  233405 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1228 06:55:05.251013  233405 cri.go:204] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.251080  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.267244  233405 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc" in container runtime
	I1228 06:55:05.267291  233405 cri.go:204] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.267339  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.267419  233405 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508" in container runtime
	I1228 06:55:05.267444  233405 cri.go:204] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.267475  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.273677  233405 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1228 06:55:05.273715  233405 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1228 06:55:05.273729  233405 cri.go:204] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.273774  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.273807  233405 cri.go:204] Removing image: registry.k8s.io/pause:3.10.1
	I1228 06:55:05.273839  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.279799  233405 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499" in container runtime
	I1228 06:55:05.279845  233405 cri.go:204] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.279883  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.283097  233405 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8" in container runtime
	I1228 06:55:05.283134  233405 cri.go:204] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.283156  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.283175  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:05.283194  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.283235  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.283260  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.283287  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1228 06:55:05.287742  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.320915  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.320931  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.320936  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.321213  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.321260  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.321272  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1228 06:55:05.321216  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.366234  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.366268  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1228 06:55:05.366367  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1228 06:55:05.370136  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1228 06:55:05.370247  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1228 06:55:05.370314  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1228 06:55:05.370381  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1228 06:55:05.415404  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1228 06:55:05.415500  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1228 06:55:05.415507  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1228 06:55:05.418056  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1228 06:55:05.418092  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1228 06:55:05.418141  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1228 06:55:05.418171  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1228 06:55:05.420852  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1228 06:55:05.420941  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1228 06:55:05.420954  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1228 06:55:05.420996  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1228 06:55:05.421048  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1228 06:55:05.421109  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1228 06:55:05.447578  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1228 06:55:05.447621  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1228 06:55:05.447685  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1228 06:55:05.447712  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1228 06:55:05.447741  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1228 06:55:05.447712  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1228 06:55:05.447759  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (23144960 bytes)
	I1228 06:55:05.447762  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (17248256 bytes)
	I1228 06:55:05.447584  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1228 06:55:05.447853  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1228 06:55:05.447873  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1228 06:55:05.447894  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1228 06:55:05.447941  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1228 06:55:05.447959  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (27696640 bytes)
	I1228 06:55:05.478825  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1228 06:55:05.478867  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (25791488 bytes)
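
Each existence check / scp pair above is stat-then-copy: a failing stat on /var/lib/minikube/images/<name> means the cached tarball has to be shipped. A local analogue of the pattern (the real thing runs stat and scp over SSH):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing only ships the cached image tarball if the target
	// does not already exist, mirroring the stat + scp pairs above.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // existence check passed; nothing to do
		} else if !os.IsNotExist(err) {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		err := copyIfMissing("cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1")
		fmt.Println(err)
	}
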
	I1228 06:55:05.546288  233405 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1228 06:55:05.546360  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1228 06:55:05.915168  233405 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:05.999890  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1228 06:55:06.041509  233405 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1228 06:55:06.041594  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1228 06:55:06.047913  233405 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1228 06:55:06.047959  233405 cri.go:204] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:06.048013  233405 ssh_runner.go:195] Run: which crictl
	I1228 06:55:07.174393  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.132773819s)
	I1228 06:55:07.174435  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1228 06:55:07.174453  233405 ssh_runner.go:235] Completed: which crictl: (1.126418106s)
	I1228 06:55:07.174459  233405 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1228 06:55:07.174520  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:07.174556  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1228 06:55:08.362615  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.188027628s)
	I1228 06:55:08.362650  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1228 06:55:08.362656  233405 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.188112473s)
	I1228 06:55:08.362681  233405 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1228 06:55:08.362727  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:08.362738  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1228 06:55:08.391252  233405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:55:09.638967  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.276205495s)
	I1228 06:55:09.638995  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1228 06:55:09.639055  233405 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1228 06:55:09.639025  233405 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.247741522s)
	I1228 06:55:09.639105  233405 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1228 06:55:09.639114  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1228 06:55:09.639195  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1228 06:55:05.606788  231701 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:55:05.621441  231701 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:55:05.621474  231701 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:55:06.106077  231701 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:55:06.112556  231701 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:55:06.114545  231701 api_server.go:141] control plane version: v1.35.0
	I1228 06:55:06.114619  231701 api_server.go:131] duration metric: took 1.008694504s to wait for apiserver health ...
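
The 500-then-200 sequence above is a plain HTTP poll of /healthz until the apiserver reports healthy; [-]poststarthook/rbac/bootstrap-roles is the only failing check while bootstrap RBAC is still being written. A minimal sketch of such a poll in Go, with an illustrative endpoint and TLS verification skipped for brevity (minikube itself verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy; the caller can move on to pod checks
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until healthy
	}
}
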
	I1228 06:55:06.114642  231701 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:55:06.123069  231701 system_pods.go:59] 8 kube-system pods found
	I1228 06:55:06.123127  231701 system_pods.go:61] "coredns-7d764666f9-vc57t" [38c655ec-8a9c-4593-a203-6279b34e6405] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:06.123146  231701 system_pods.go:61] "etcd-test-preload-785573" [a1de6eee-293b-43ad-abb3-42725da1610b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:55:06.123168  231701 system_pods.go:61] "kindnet-snn92" [a1003399-a924-4031-b9fd-6e13641ece57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:55:06.123185  231701 system_pods.go:61] "kube-apiserver-test-preload-785573" [fabcdb5c-ba59-4892-9a5a-6a149079d46b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:55:06.123202  231701 system_pods.go:61] "kube-controller-manager-test-preload-785573" [e44e5af9-1847-4d1f-96f1-8bd0a271fb7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:55:06.123221  231701 system_pods.go:61] "kube-proxy-2qrxs" [73dcdb00-ed0b-4b04-811a-3fcd38df3964] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:55:06.123229  231701 system_pods.go:61] "kube-scheduler-test-preload-785573" [2d39491e-c2e6-4344-965c-66e4e6fabf05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:55:06.123242  231701 system_pods.go:61] "storage-provisioner" [e42dbaeb-f1cb-492e-9982-72ecbd4d95d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:06.123256  231701 system_pods.go:74] duration metric: took 8.596714ms to wait for pod list to return data ...
	I1228 06:55:06.123273  231701 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:55:06.128133  231701 default_sa.go:45] found service account: "default"
	I1228 06:55:06.128189  231701 default_sa.go:55] duration metric: took 4.900554ms for default service account to be created ...
	I1228 06:55:06.128202  231701 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:55:06.131849  231701 system_pods.go:86] 8 kube-system pods found
	I1228 06:55:06.131887  231701 system_pods.go:89] "coredns-7d764666f9-vc57t" [38c655ec-8a9c-4593-a203-6279b34e6405] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:06.131899  231701 system_pods.go:89] "etcd-test-preload-785573" [a1de6eee-293b-43ad-abb3-42725da1610b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:55:06.131909  231701 system_pods.go:89] "kindnet-snn92" [a1003399-a924-4031-b9fd-6e13641ece57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:55:06.131920  231701 system_pods.go:89] "kube-apiserver-test-preload-785573" [fabcdb5c-ba59-4892-9a5a-6a149079d46b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:55:06.131930  231701 system_pods.go:89] "kube-controller-manager-test-preload-785573" [e44e5af9-1847-4d1f-96f1-8bd0a271fb7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:55:06.131941  231701 system_pods.go:89] "kube-proxy-2qrxs" [73dcdb00-ed0b-4b04-811a-3fcd38df3964] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:55:06.131957  231701 system_pods.go:89] "kube-scheduler-test-preload-785573" [2d39491e-c2e6-4344-965c-66e4e6fabf05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:55:06.131974  231701 system_pods.go:89] "storage-provisioner" [e42dbaeb-f1cb-492e-9982-72ecbd4d95d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:06.131990  231701 system_pods.go:126] duration metric: took 3.780817ms to wait for k8s-apps to be running ...
	I1228 06:55:06.132001  231701 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:55:06.132059  231701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:55:06.147846  231701 system_svc.go:56] duration metric: took 15.83889ms WaitForService to wait for kubelet
	I1228 06:55:06.147871  231701 kubeadm.go:587] duration metric: took 2.648638525s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:55:06.147893  231701 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:55:06.150486  231701 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:55:06.150509  231701 node_conditions.go:123] node cpu capacity is 8
	I1228 06:55:06.150521  231701 node_conditions.go:105] duration metric: took 2.623898ms to run NodePressure ...
	I1228 06:55:06.150534  231701 start.go:242] waiting for startup goroutines ...
	I1228 06:55:06.150540  231701 start.go:247] waiting for cluster config update ...
	I1228 06:55:06.150555  231701 start.go:256] writing updated cluster config ...
	I1228 06:55:06.150793  231701 ssh_runner.go:195] Run: rm -f paused
	I1228 06:55:06.155042  231701 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:55:06.155895  231701 kapi.go:59] client config for test-preload-785573: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.crt", KeyFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/profiles/test-preload-785573/client.key", CAFile:"/home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2780200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1228 06:55:06.158782  231701 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vc57t" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:55:08.164156  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	W1228 06:55:10.167967  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
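
The pod_ready.go warnings repeat because the coredns pod is Running but its Ready condition is still False. A simplified stand-in for that predicate (the Condition type here is illustrative, not the real corev1 struct):

package main

import "fmt"

// Condition mirrors the shape of the pod status conditions printed in the
// log ("Ready:ContainersNotReady ..."); simplified for this sketch.
type Condition struct {
	Type   string
	Status string
}

// isReady is the predicate behind the wait loop: a pod counts as Ready only
// when its Ready condition reports True.
func isReady(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []Condition{{Type: "Ready", Status: "False"}} // as logged for coredns
	fmt.Println("pod ready:", isReady(conds))
}
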
	W1228 06:55:06.218156  226053 node_ready.go:57] node "old-k8s-version-694122" has "Ready":"False" status (will retry)
	W1228 06:55:08.717775  226053 node_ready.go:57] node "old-k8s-version-694122" has "Ready":"False" status (will retry)
	I1228 06:55:08.953953  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:08.954469  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:08.954568  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.006787  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.030296  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.030325  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:09.030380  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:09.034404  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.086323  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.107898  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.107932  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:09.107996  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:09.111909  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.172018  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.194728  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.194755  174872 logs.go:282] 0 containers: []
	W1228 06:55:09.194766  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:09.194810  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.257989  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.280977  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.281006  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:09.281102  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:09.284729  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:09.288522  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.349261  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.370019  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.370059  174872 logs.go:282] 0 containers: []
	W1228 06:55:09.370068  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:09.370116  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.423115  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.445209  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.445236  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:09.445280  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:09.449075  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.512138  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.537786  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.537817  174872 logs.go:282] 0 containers: []
	W1228 06:55:09.537828  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:09.537875  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:09.597999  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:09.617847  174872 cri.go:83] list returned 5 containers
	I1228 06:55:09.617871  174872 logs.go:282] 0 containers: []
	W1228 06:55:09.617882  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:09.617892  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:09.617905  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:09.653929  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:09.653966  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:09.756149  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:09.756185  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:09.814199  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:09.814223  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:09.814239  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:09.898951  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:09.898990  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:09.964989  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:09.965019  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:10.003021  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:10.003068  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:55:10.017316  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:10.017339  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:10.056838  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:10.056864  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:10.090942  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:10.090967  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
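
Each "Gathering logs for ..." step above shells out through /bin/bash -c to a per-source command. A sketch of that fan-out with command strings copied from the log (running them needs root on a CRI-O node, so treat this as illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Source name -> shell pipeline, as seen in the logs.go:123 lines above.
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}
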
	I1228 06:55:10.869361  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.230222973s)
	I1228 06:55:10.869387  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1228 06:55:10.869411  233405 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1228 06:55:10.869416  233405 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.230200212s)
	I1228 06:55:10.869452  233405 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1228 06:55:10.869478  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1228 06:55:10.869490  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1228 06:55:12.216852  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.347335974s)
	I1228 06:55:12.216891  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1228 06:55:12.216920  233405 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1228 06:55:12.216982  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1228 06:55:13.347867  233405 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.130860368s)
	I1228 06:55:13.347901  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1228 06:55:13.347929  233405 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1228 06:55:13.347972  233405 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1228 06:55:13.920948  233405 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1228 06:55:13.921003  233405 cache_images.go:125] Successfully loaded all cached images
	I1228 06:55:13.921011  233405 cache_images.go:94] duration metric: took 8.85377875s to LoadCachedImages
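
LoadCachedImages therefore decomposes into an existence check, an scp of each missing archive, and a serial "podman load -i" per archive; the per-image "Completed:" timings above measure that last step. A sketch of the load phase, with an abbreviated image list taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Abbreviated list; the log loads seven archives in total.
	images := []string{"pause_3.10.1", "kube-scheduler_v1.35.0", "etcd_3.6.6-0"}
	for _, img := range images {
		start := time.Now()
		err := exec.Command("sudo", "podman", "load", "-i",
			"/var/lib/minikube/images/"+img).Run()
		fmt.Printf("loaded %s in %s (err=%v)\n", img, time.Since(start), err)
	}
}
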
	I1228 06:55:13.921041  233405 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:55:13.921147  233405 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-950460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:55:13.921232  233405 ssh_runner.go:195] Run: crio config
	I1228 06:55:13.966460  233405 cni.go:84] Creating CNI manager for ""
	I1228 06:55:13.966487  233405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:55:13.966506  233405 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:55:13.966532  233405 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950460 NodeName:no-preload-950460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:55:13.966697  233405 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:55:13.966830  233405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:55:13.975302  233405 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1228 06:55:13.975360  233405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1228 06:55:13.983368  233405 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1228 06:55:13.983383  233405 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1228 06:55:13.983394  233405 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
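
The "Not caching binary" URLs embed their own verification source: the download URL carries a checksum=file:<same URL>.sha256 query so the fetcher can check the payload against the published digest. A sketch of how those URLs are assembled (binaryURL is a hypothetical helper; version and arch are taken from the log):

package main

import "fmt"

// binaryURL rebuilds the URL shape logged at binary.go:80 above.
func binaryURL(version, arch, name string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/%s/%s", version, arch, name)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	for _, b := range []string{"kubectl", "kubelet", "kubeadm"} {
		fmt.Println(binaryURL("v1.35.0", "amd64", b))
	}
}
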
	I1228 06:55:13.983435  233405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:55:13.983450  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1228 06:55:13.983468  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1228 06:55:13.987819  233405 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1228 06:55:13.987847  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1228 06:55:13.988005  233405 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1228 06:55:13.988046  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1228 06:55:14.004912  233405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1228 06:55:14.039133  233405 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1228 06:55:14.039175  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
	I1228 06:55:14.609159  233405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:55:14.618746  233405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:55:14.635696  233405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:55:14.826095  233405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1228 06:55:14.841177  233405 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:55:14.845718  233405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
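
The bash one-liner above rewrites /etc/hosts by dropping any stale control-plane.minikube.internal line and appending the current mapping, staged through a temp file so the final write is a single sudo cp. The same transform as a pure function (updateHosts is hypothetical; the input is inlined):

package main

import (
	"fmt"
	"strings"
)

// updateHosts removes any existing control-plane entry and appends a fresh
// one, mirroring the grep -v / echo pipeline in the log.
func updateHosts(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(updateHosts("127.0.0.1\tlocalhost", "192.168.94.2"))
}
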
	I1228 06:55:14.877444  233405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:55:14.974447  233405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:55:15.002375  233405 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460 for IP: 192.168.94.2
	I1228 06:55:15.002401  233405 certs.go:195] generating shared ca certs ...
	I1228 06:55:15.002436  233405 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.002565  233405 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:55:15.002607  233405 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:55:15.002617  233405 certs.go:257] generating profile certs ...
	I1228 06:55:15.002672  233405 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key
	I1228 06:55:15.002685  233405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.crt with IP's: []
	W1228 06:55:12.664497  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	W1228 06:55:14.762070  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	I1228 06:55:15.054265  233405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.crt ...
	I1228 06:55:15.054297  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.crt: {Name:mk55beeae116ebee4cf0b7ccb020697450d07cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.054441  233405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key ...
	I1228 06:55:15.054452  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key: {Name:mk0fbe1e6808d360f446fd8ed16fb053e211e7e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.054538  233405 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947
	I1228 06:55:15.054554  233405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt.3468f947 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1228 06:55:15.132541  233405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt.3468f947 ...
	I1228 06:55:15.132566  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt.3468f947: {Name:mkf6535639236d952546771000727103763ca10c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.132715  233405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947 ...
	I1228 06:55:15.132728  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947: {Name:mk761be1f9252e10fe7f5a30588abb1ea751aaa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.132797  233405 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt.3468f947 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt
	I1228 06:55:15.132881  233405 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key
	I1228 06:55:15.132986  233405 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key
	I1228 06:55:15.133014  233405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt with IP's: []
	I1228 06:55:15.242125  233405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt ...
	I1228 06:55:15.242157  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt: {Name:mk0ce5b3f31818a33f95af983e4b4e1bf349f9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:15.242315  233405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key ...
	I1228 06:55:15.242330  233405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key: {Name:mke908b1fc53212aed138a8616862ebf97a7720f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
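
The certs phase signs a per-profile serving certificate with the shared minikubeCA, using the IP SANs listed at 06:55:15.054554 above. A self-contained crypto/x509 sketch of that shape (key size, validity, and subjects are illustrative, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key/cert standing in for the cached minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert with the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
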
	I1228 06:55:15.242512  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:55:15.242551  233405 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:55:15.242562  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:55:15.242588  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:55:15.242613  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:55:15.242635  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:55:15.242674  233405 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:55:15.243251  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:55:15.261499  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:55:15.278853  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:55:15.296467  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:55:15.313636  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:55:15.331077  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 06:55:15.348386  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:55:15.366550  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:55:15.383949  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:55:15.403550  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:55:15.421860  233405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:55:15.439359  233405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:55:15.454017  233405 ssh_runner.go:195] Run: openssl version
	I1228 06:55:15.461053  233405 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:55:15.469381  233405 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:55:15.477454  233405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:55:15.481543  233405 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:55:15.481612  233405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:55:15.516560  233405 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:55:15.525024  233405 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
	I1228 06:55:15.533308  233405 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:55:15.541217  233405 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:55:15.549258  233405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:55:15.554096  233405 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:55:15.554173  233405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:55:15.589154  233405 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:55:15.597733  233405 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:55:15.605442  233405 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:15.612956  233405 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:55:15.620540  233405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:15.624533  233405 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:15.624591  233405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:55:15.662498  233405 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:55:15.671019  233405 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
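
The openssl/ln pairs above install each PEM into the OpenSSL trust store: compute the subject hash, then symlink <hash>.0 (b5213941.0 for minikubeCA.pem here) to the certificate file. A sketch of one such install (paths are illustrative; minikube runs this over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCert hashes the cert subject and links <hash>.0 to the PEM, the
// lookup scheme OpenSSL uses for /etc/ssl/certs.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // mimic ln -fs by replacing any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
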
	I1228 06:55:15.678845  233405 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:55:15.682599  233405 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:55:15.682654  233405 kubeadm.go:401] StartCluster: {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:55:15.682742  233405 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:15.738693  233405 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:55:15.752984  233405 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:15Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:55:15.753084  233405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:55:15.761744  233405 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:55:15.770786  233405 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:55:15.770836  233405 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:55:15.779094  233405 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:55:15.779111  233405 kubeadm.go:158] found existing configuration files:
	
	I1228 06:55:15.779149  233405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:55:15.787830  233405 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:55:15.787882  233405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:55:15.795432  233405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:55:15.804128  233405 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:55:15.804183  233405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:55:15.812378  233405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:55:15.822300  233405 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:55:15.822357  233405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:55:15.831123  233405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:55:15.839310  233405 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:55:15.839361  233405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
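
The grep/rm sequence above sweeps stale kubeconfigs: any file not referencing https://control-plane.minikube.internal:8443 is removed so kubeadm init regenerates it (here all four are simply absent). The same sweep as a sketch, with paths and endpoint copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s is stale or missing, removing\n", f)
			os.Remove(f) // errors ignored, mirroring rm -f
		}
	}
}
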
	I1228 06:55:15.846560  233405 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:55:15.881789  233405 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:55:15.881865  233405 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:55:15.952004  233405 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:55:15.952113  233405 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:55:15.952157  233405 kubeadm.go:319] OS: Linux
	I1228 06:55:15.952212  233405 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:55:15.952273  233405 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:55:15.952331  233405 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:55:15.952396  233405 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:55:15.952466  233405 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:55:15.952539  233405 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:55:15.952603  233405 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:55:15.952697  233405 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:55:16.008956  233405 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:55:16.009147  233405 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:55:16.009286  233405 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:55:16.022568  233405 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1228 06:55:11.216832  226053 node_ready.go:57] node "old-k8s-version-694122" has "Ready":"False" status (will retry)
	W1228 06:55:13.217956  226053 node_ready.go:57] node "old-k8s-version-694122" has "Ready":"False" status (will retry)
	I1228 06:55:14.819653  226053 node_ready.go:49] node "old-k8s-version-694122" is "Ready"
	I1228 06:55:14.819688  226053 node_ready.go:38] duration metric: took 12.605606428s for node "old-k8s-version-694122" to be "Ready" ...
	I1228 06:55:14.819706  226053 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:55:14.819762  226053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:55:14.831997  226053 api_server.go:72] duration metric: took 13.225455145s to wait for apiserver process to appear ...
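
The "waiting for apiserver process" step is a pgrep poll for a kube-apiserver started by minikube; only once a PID appears does the healthz wait below begin. A minimal sketch of that poll (pattern string copied from the log; the interval is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // keep polling until the process appears
	}
}
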
	I1228 06:55:14.832058  226053 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:55:14.832079  226053 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:55:14.836187  226053 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1228 06:55:14.837375  226053 api_server.go:141] control plane version: v1.28.0
	I1228 06:55:14.837400  226053 api_server.go:131] duration metric: took 5.334292ms to wait for apiserver health ...
	I1228 06:55:14.837410  226053 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:55:14.841637  226053 system_pods.go:59] 8 kube-system pods found
	I1228 06:55:14.841679  226053 system_pods.go:61] "coredns-5dd5756b68-f75js" [90c72704-97b2-410e-b66d-bf7b621758a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:14.841692  226053 system_pods.go:61] "etcd-old-k8s-version-694122" [ee40c5dc-0ca3-4350-aefa-94fe04b50859] Running
	I1228 06:55:14.841703  226053 system_pods.go:61] "kindnet-v7rhd" [0b1ff99b-7679-44db-834b-87265079d9b1] Running
	I1228 06:55:14.841709  226053 system_pods.go:61] "kube-apiserver-old-k8s-version-694122" [e587cdcd-8444-488d-8424-cd8a692b94dc] Running
	I1228 06:55:14.841718  226053 system_pods.go:61] "kube-controller-manager-old-k8s-version-694122" [d7b1699f-eae7-4176-a2fc-dae6aa56ff7f] Running
	I1228 06:55:14.841723  226053 system_pods.go:61] "kube-proxy-ckjcc" [1e2636ad-f5a4-4488-bb26-9f14615b487e] Running
	I1228 06:55:14.841731  226053 system_pods.go:61] "kube-scheduler-old-k8s-version-694122" [1ecff9c6-b0fd-4d25-b7df-2ab3ec775757] Running
	I1228 06:55:14.841739  226053 system_pods.go:61] "storage-provisioner" [1ac23a0e-e11c-4689-880a-1d7501b8178f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:14.841749  226053 system_pods.go:74] duration metric: took 4.332118ms to wait for pod list to return data ...
	I1228 06:55:14.841763  226053 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:55:14.843917  226053 default_sa.go:45] found service account: "default"
	I1228 06:55:14.843935  226053 default_sa.go:55] duration metric: took 2.16627ms for default service account to be created ...
	I1228 06:55:14.843942  226053 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:55:14.847228  226053 system_pods.go:86] 8 kube-system pods found
	I1228 06:55:14.847258  226053 system_pods.go:89] "coredns-5dd5756b68-f75js" [90c72704-97b2-410e-b66d-bf7b621758a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:14.847265  226053 system_pods.go:89] "etcd-old-k8s-version-694122" [ee40c5dc-0ca3-4350-aefa-94fe04b50859] Running
	I1228 06:55:14.847275  226053 system_pods.go:89] "kindnet-v7rhd" [0b1ff99b-7679-44db-834b-87265079d9b1] Running
	I1228 06:55:14.847281  226053 system_pods.go:89] "kube-apiserver-old-k8s-version-694122" [e587cdcd-8444-488d-8424-cd8a692b94dc] Running
	I1228 06:55:14.847287  226053 system_pods.go:89] "kube-controller-manager-old-k8s-version-694122" [d7b1699f-eae7-4176-a2fc-dae6aa56ff7f] Running
	I1228 06:55:14.847290  226053 system_pods.go:89] "kube-proxy-ckjcc" [1e2636ad-f5a4-4488-bb26-9f14615b487e] Running
	I1228 06:55:14.847295  226053 system_pods.go:89] "kube-scheduler-old-k8s-version-694122" [1ecff9c6-b0fd-4d25-b7df-2ab3ec775757] Running
	I1228 06:55:14.847309  226053 system_pods.go:89] "storage-provisioner" [1ac23a0e-e11c-4689-880a-1d7501b8178f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:14.847342  226053 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1228 06:55:15.131341  226053 system_pods.go:86] 8 kube-system pods found
	I1228 06:55:15.131377  226053 system_pods.go:89] "coredns-5dd5756b68-f75js" [90c72704-97b2-410e-b66d-bf7b621758a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:15.131384  226053 system_pods.go:89] "etcd-old-k8s-version-694122" [ee40c5dc-0ca3-4350-aefa-94fe04b50859] Running
	I1228 06:55:15.131391  226053 system_pods.go:89] "kindnet-v7rhd" [0b1ff99b-7679-44db-834b-87265079d9b1] Running
	I1228 06:55:15.131397  226053 system_pods.go:89] "kube-apiserver-old-k8s-version-694122" [e587cdcd-8444-488d-8424-cd8a692b94dc] Running
	I1228 06:55:15.131403  226053 system_pods.go:89] "kube-controller-manager-old-k8s-version-694122" [d7b1699f-eae7-4176-a2fc-dae6aa56ff7f] Running
	I1228 06:55:15.131408  226053 system_pods.go:89] "kube-proxy-ckjcc" [1e2636ad-f5a4-4488-bb26-9f14615b487e] Running
	I1228 06:55:15.131413  226053 system_pods.go:89] "kube-scheduler-old-k8s-version-694122" [1ecff9c6-b0fd-4d25-b7df-2ab3ec775757] Running
	I1228 06:55:15.131424  226053 system_pods.go:89] "storage-provisioner" [1ac23a0e-e11c-4689-880a-1d7501b8178f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:15.392380  226053 system_pods.go:86] 8 kube-system pods found
	I1228 06:55:15.392411  226053 system_pods.go:89] "coredns-5dd5756b68-f75js" [90c72704-97b2-410e-b66d-bf7b621758a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:55:15.392416  226053 system_pods.go:89] "etcd-old-k8s-version-694122" [ee40c5dc-0ca3-4350-aefa-94fe04b50859] Running
	I1228 06:55:15.392421  226053 system_pods.go:89] "kindnet-v7rhd" [0b1ff99b-7679-44db-834b-87265079d9b1] Running
	I1228 06:55:15.392425  226053 system_pods.go:89] "kube-apiserver-old-k8s-version-694122" [e587cdcd-8444-488d-8424-cd8a692b94dc] Running
	I1228 06:55:15.392429  226053 system_pods.go:89] "kube-controller-manager-old-k8s-version-694122" [d7b1699f-eae7-4176-a2fc-dae6aa56ff7f] Running
	I1228 06:55:15.392432  226053 system_pods.go:89] "kube-proxy-ckjcc" [1e2636ad-f5a4-4488-bb26-9f14615b487e] Running
	I1228 06:55:15.392435  226053 system_pods.go:89] "kube-scheduler-old-k8s-version-694122" [1ecff9c6-b0fd-4d25-b7df-2ab3ec775757] Running
	I1228 06:55:15.392439  226053 system_pods.go:89] "storage-provisioner" [1ac23a0e-e11c-4689-880a-1d7501b8178f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:55:15.785793  226053 system_pods.go:86] 8 kube-system pods found
	I1228 06:55:15.785826  226053 system_pods.go:89] "coredns-5dd5756b68-f75js" [90c72704-97b2-410e-b66d-bf7b621758a3] Running
	I1228 06:55:15.785834  226053 system_pods.go:89] "etcd-old-k8s-version-694122" [ee40c5dc-0ca3-4350-aefa-94fe04b50859] Running
	I1228 06:55:15.785839  226053 system_pods.go:89] "kindnet-v7rhd" [0b1ff99b-7679-44db-834b-87265079d9b1] Running
	I1228 06:55:15.785845  226053 system_pods.go:89] "kube-apiserver-old-k8s-version-694122" [e587cdcd-8444-488d-8424-cd8a692b94dc] Running
	I1228 06:55:15.785851  226053 system_pods.go:89] "kube-controller-manager-old-k8s-version-694122" [d7b1699f-eae7-4176-a2fc-dae6aa56ff7f] Running
	I1228 06:55:15.785857  226053 system_pods.go:89] "kube-proxy-ckjcc" [1e2636ad-f5a4-4488-bb26-9f14615b487e] Running
	I1228 06:55:15.785863  226053 system_pods.go:89] "kube-scheduler-old-k8s-version-694122" [1ecff9c6-b0fd-4d25-b7df-2ab3ec775757] Running
	I1228 06:55:15.785868  226053 system_pods.go:89] "storage-provisioner" [1ac23a0e-e11c-4689-880a-1d7501b8178f] Running
	I1228 06:55:15.785880  226053 system_pods.go:126] duration metric: took 941.933048ms to wait for k8s-apps to be running ...
	I1228 06:55:15.785890  226053 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:55:15.785948  226053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:55:15.799140  226053 system_svc.go:56] duration metric: took 13.242416ms WaitForService to wait for kubelet
	I1228 06:55:15.799177  226053 kubeadm.go:587] duration metric: took 14.192639496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:55:15.799198  226053 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:55:15.801653  226053 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:55:15.801674  226053 node_conditions.go:123] node cpu capacity is 8
	I1228 06:55:15.801699  226053 node_conditions.go:105] duration metric: took 2.493946ms to run NodePressure ...
	I1228 06:55:15.801714  226053 start.go:242] waiting for startup goroutines ...
	I1228 06:55:15.801728  226053 start.go:247] waiting for cluster config update ...
	I1228 06:55:15.801745  226053 start.go:256] writing updated cluster config ...
	I1228 06:55:15.802047  226053 ssh_runner.go:195] Run: rm -f paused
	I1228 06:55:15.806185  226053 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:55:15.810345  226053 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.816783  226053 pod_ready.go:94] pod "coredns-5dd5756b68-f75js" is "Ready"
	I1228 06:55:15.816804  226053 pod_ready.go:86] duration metric: took 6.438124ms for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.820006  226053 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.824411  226053 pod_ready.go:94] pod "etcd-old-k8s-version-694122" is "Ready"
	I1228 06:55:15.824432  226053 pod_ready.go:86] duration metric: took 4.404868ms for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.826744  226053 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.830557  226053 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-694122" is "Ready"
	I1228 06:55:15.830585  226053 pod_ready.go:86] duration metric: took 3.81495ms for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:15.833154  226053 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:12.627987  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:12.628475  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:12.628557  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:12.691301  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:12.712278  174872 cri.go:83] list returned 5 containers
	I1228 06:55:12.712311  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:12.712365  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:12.716761  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:12.776651  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:12.800548  174872 cri.go:83] list returned 5 containers
	I1228 06:55:12.800576  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:12.800643  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:12.805351  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:12.862880  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:12.887697  174872 cri.go:83] list returned 5 containers
	I1228 06:55:12.887723  174872 logs.go:282] 0 containers: []
	W1228 06:55:12.887733  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:12.887773  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:12.941085  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:12.962545  174872 cri.go:83] list returned 5 containers
	I1228 06:55:12.962576  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:12.962631  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:12.966921  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:12.971369  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:13.025833  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:13.052380  174872 cri.go:83] list returned 5 containers
	I1228 06:55:13.052410  174872 logs.go:282] 0 containers: []
	W1228 06:55:13.052420  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:13.052469  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:13.124978  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:13.150390  174872 cri.go:83] list returned 5 containers
	I1228 06:55:13.150425  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:13.150481  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:13.155075  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:13.209918  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:13.232833  174872 cri.go:83] list returned 5 containers
	I1228 06:55:13.232861  174872 logs.go:282] 0 containers: []
	W1228 06:55:13.232870  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:13.232917  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:13.300574  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:13.322324  174872 cri.go:83] list returned 5 containers
	I1228 06:55:13.322349  174872 logs.go:282] 0 containers: []
	W1228 06:55:13.322356  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:13.322367  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:13.322377  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:13.430383  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:13.430421  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:13.470024  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:13.470077  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:13.506220  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:13.506255  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:13.597422  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:13.597465  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:55:13.636726  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:13.636756  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:13.692048  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:13.692083  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:55:13.710464  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:13.710495  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:13.775441  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:13.775463  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:13.775479  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:13.813118  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:13.813160  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:16.360444  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:16.360909  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:16.360987  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.410151  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.429775  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.429802  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:16.429854  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:16.434434  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.486145  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.505250  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.505277  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:16.505329  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:16.508867  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.559442  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.580289  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.580313  174872 logs.go:282] 0 containers: []
	W1228 06:55:16.580320  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:16.580358  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.210487  226053 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-694122" is "Ready"
	I1228 06:55:16.210512  226053 pod_ready.go:86] duration metric: took 377.339251ms for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:16.411981  226053 pod_ready.go:83] waiting for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:16.810577  226053 pod_ready.go:94] pod "kube-proxy-ckjcc" is "Ready"
	I1228 06:55:16.810601  226053 pod_ready.go:86] duration metric: took 398.595119ms for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:17.011450  226053 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:17.410772  226053 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-694122" is "Ready"
	I1228 06:55:17.410803  226053 pod_ready.go:86] duration metric: took 399.32142ms for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:55:17.410818  226053 pod_ready.go:40] duration metric: took 1.604603364s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:55:17.465307  226053 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 06:55:17.467077  226053 out.go:203] 
	W1228 06:55:17.468285  226053 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 06:55:17.469642  226053 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:55:17.470887  226053 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-694122" cluster and "default" namespace by default
	I1228 06:55:16.024615  233405 out.go:252]   - Generating certificates and keys ...
	I1228 06:55:16.024743  233405 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:55:16.024853  233405 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:55:16.196594  233405 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:55:16.325318  233405 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:55:16.476731  233405 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:55:16.560048  233405 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:55:16.606277  233405 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:55:16.606459  233405 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-950460] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:55:16.689988  233405 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:55:16.690164  233405 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-950460] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:55:16.860472  233405 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:55:17.059903  233405 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:55:17.084769  233405 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:55:17.084831  233405 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:55:17.145217  233405 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:55:17.369275  233405 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:55:17.413233  233405 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:55:17.501840  233405 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:55:17.670254  233405 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:55:17.671595  233405 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:55:17.675204  233405 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:55:17.677445  233405 out.go:252]   - Booting up control plane ...
	I1228 06:55:17.677550  233405 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:55:17.677671  233405 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:55:17.677797  233405 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:55:17.691587  233405 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:55:17.691719  233405 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:55:17.697967  233405 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:55:17.698338  233405 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:55:17.698405  233405 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:55:17.798134  233405 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:55:17.798335  233405 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:55:18.301002  233405 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.246233ms
	I1228 06:55:18.303873  233405 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:55:18.304004  233405 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1228 06:55:18.304130  233405 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:55:18.304250  233405 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:55:19.309002  233405 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005074214s
	W1228 06:55:17.165207  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	W1228 06:55:19.664267  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	I1228 06:55:16.630956  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.652204  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.652228  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:16.652273  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:16.656362  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:16.659880  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.710477  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.729903  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.729936  174872 logs.go:282] 0 containers: []
	W1228 06:55:16.729946  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:16.729991  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.776840  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.795817  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.795843  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:16.795914  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:16.799614  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.854380  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.873780  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.873804  174872 logs.go:282] 0 containers: []
	W1228 06:55:16.873811  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:16.873849  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:16.922340  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:16.941843  174872 cri.go:83] list returned 5 containers
	I1228 06:55:16.941869  174872 logs.go:282] 0 containers: []
	W1228 06:55:16.941876  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:16.941900  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:16.941914  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:55:16.955837  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:16.955866  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:16.989229  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:16.989254  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:17.067815  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:17.067853  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:55:17.104744  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:17.104775  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:17.140718  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:17.140751  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:17.181880  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:17.181918  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:17.285373  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:17.285404  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:17.341726  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:17.341750  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:17.341760  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:17.380310  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:17.380340  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:19.956175  174872 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1228 06:55:19.956607  174872 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1228 06:55:19.956692  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.006132  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.030793  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.030816  174872 logs.go:282] 1 containers: [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125]
	I1228 06:55:20.030860  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:20.034627  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.100452  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.130571  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.130599  174872 logs.go:282] 1 containers: [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e]
	I1228 06:55:20.130658  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:20.135071  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.207611  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.235060  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.235145  174872 logs.go:282] 0 containers: []
	W1228 06:55:20.235179  174872 logs.go:284] No container was found matching "coredns"
	I1228 06:55:20.235245  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.286638  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.307368  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.307392  174872 logs.go:282] 2 containers: [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d]
	I1228 06:55:20.307441  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:20.311306  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:20.315060  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.364328  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.385194  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.385227  174872 logs.go:282] 0 containers: []
	W1228 06:55:20.385234  174872 logs.go:284] No container was found matching "kube-proxy"
	I1228 06:55:20.385276  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.437176  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.456399  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.456423  174872 logs.go:282] 1 containers: [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015]
	I1228 06:55:20.456476  174872 ssh_runner.go:195] Run: which crictl
	I1228 06:55:20.460290  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.511824  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.535815  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.535844  174872 logs.go:282] 0 containers: []
	W1228 06:55:20.535853  174872 logs.go:284] No container was found matching "kindnet"
	I1228 06:55:20.535916  174872 ssh_runner.go:195] Run: sudo crio config
	I1228 06:55:20.585601  174872 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:55:20.607604  174872 cri.go:83] list returned 5 containers
	I1228 06:55:20.607634  174872 logs.go:282] 0 containers: []
	W1228 06:55:20.607644  174872 logs.go:284] No container was found matching "storage-provisioner"
	I1228 06:55:20.607657  174872 logs.go:123] Gathering logs for kubelet ...
	I1228 06:55:20.607670  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 06:55:20.727658  174872 logs.go:123] Gathering logs for describe nodes ...
	I1228 06:55:20.727691  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 06:55:20.783740  174872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 06:55:20.783758  174872 logs.go:123] Gathering logs for kube-apiserver [48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125] ...
	I1228 06:55:20.783769  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48f7e5ef8396b645cf2418dce4f1d1cc2491293b37ed5603ea087c1517820125"
	I1228 06:55:20.820746  174872 logs.go:123] Gathering logs for etcd [c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e] ...
	I1228 06:55:20.820773  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c07140d372bde62d2a98606e4f16ca65bd2107b9d582b44d9c3b9964313ff88e"
	I1228 06:55:20.854570  174872 logs.go:123] Gathering logs for kube-scheduler [1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5] ...
	I1228 06:55:20.854597  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa14a7d75bca6368dd9bd6a02e87552346d6b20405558ba53c2c1c7531f55f5"
	I1228 06:55:20.931607  174872 logs.go:123] Gathering logs for kube-scheduler [e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d] ...
	I1228 06:55:20.931639  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3b673174dc333952a5d15a4c1c13010ebf1520d1a7e491fd37083392b23cb0d"
	I1228 06:55:20.966047  174872 logs.go:123] Gathering logs for CRI-O ...
	I1228 06:55:20.966079  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1228 06:55:21.019471  174872 logs.go:123] Gathering logs for container status ...
	I1228 06:55:21.019506  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1228 06:55:21.058209  174872 logs.go:123] Gathering logs for dmesg ...
	I1228 06:55:21.058236  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 06:55:21.073397  174872 logs.go:123] Gathering logs for kube-controller-manager [255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015] ...
	I1228 06:55:21.073428  174872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 255765325be21023bb69e4f085b53b94ab8a4ecccaa851e2cbb9c042d052b015"
	I1228 06:55:20.238221  233405 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.934289215s
	I1228 06:55:21.810585  233405 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.503681535s
	I1228 06:55:21.824263  233405 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:55:21.833563  233405 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:55:21.842686  233405 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:55:21.842913  233405 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-950460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:55:21.850295  233405 kubeadm.go:319] [bootstrap-token] Using token: 3tit08.9647mdo2tefyyeox
	I1228 06:55:21.851599  233405 out.go:252]   - Configuring RBAC rules ...
	I1228 06:55:21.851743  233405 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:55:21.854322  233405 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:55:21.859093  233405 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:55:21.861236  233405 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:55:21.864558  233405 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:55:21.866745  233405 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:55:22.214777  233405 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:55:22.626870  233405 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:55:23.214409  233405 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:55:23.215513  233405 kubeadm.go:319] 
	I1228 06:55:23.215589  233405 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:55:23.215597  233405 kubeadm.go:319] 
	I1228 06:55:23.215662  233405 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:55:23.215669  233405 kubeadm.go:319] 
	I1228 06:55:23.215696  233405 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:55:23.215747  233405 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:55:23.215798  233405 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:55:23.215804  233405 kubeadm.go:319] 
	I1228 06:55:23.215857  233405 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:55:23.215864  233405 kubeadm.go:319] 
	I1228 06:55:23.215908  233405 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:55:23.215914  233405 kubeadm.go:319] 
	I1228 06:55:23.215965  233405 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:55:23.216098  233405 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:55:23.216191  233405 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:55:23.216198  233405 kubeadm.go:319] 
	I1228 06:55:23.216280  233405 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:55:23.216354  233405 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:55:23.216361  233405 kubeadm.go:319] 
	I1228 06:55:23.216450  233405 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3tit08.9647mdo2tefyyeox \
	I1228 06:55:23.216554  233405 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:55:23.216577  233405 kubeadm.go:319] 	--control-plane 
	I1228 06:55:23.216581  233405 kubeadm.go:319] 
	I1228 06:55:23.216686  233405 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:55:23.216698  233405 kubeadm.go:319] 
	I1228 06:55:23.216801  233405 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3tit08.9647mdo2tefyyeox \
	I1228 06:55:23.216954  233405 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
	I1228 06:55:23.219264  233405 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:55:23.219448  233405 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 06:55:23.219480  233405 cni.go:84] Creating CNI manager for ""
	I1228 06:55:23.219494  233405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:55:23.220966  233405 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1228 06:55:23.222133  233405 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:55:23.226580  233405 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:55:23.226600  233405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:55:23.239637  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:55:23.442596  233405 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:55:23.442663  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:23.442699  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-950460 minikube.k8s.io/updated_at=2025_12_28T06_55_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=no-preload-950460 minikube.k8s.io/primary=true
	I1228 06:55:23.527340  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:23.527342  233405 ops.go:34] apiserver oom_adj: -16
	I1228 06:55:24.028084  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:24.528246  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:55:25.028077  233405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1228 06:55:21.665388  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	W1228 06:55:24.164656  231701 pod_ready.go:104] pod "coredns-7d764666f9-vc57t" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:55:15 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:15.078223003Z" level=info msg="Starting container: e4313ea17a8b4ef1cef4b80b3c96564ac34c8aa301b89496710cefabd213f3b4" id=e029e3e4-22d6-40dc-a859-90bbc10dd6f2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:55:15 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:15.079954807Z" level=info msg="Started container" PID=2202 containerID=e4313ea17a8b4ef1cef4b80b3c96564ac34c8aa301b89496710cefabd213f3b4 description=kube-system/coredns-5dd5756b68-f75js/coredns id=e029e3e4-22d6-40dc-a859-90bbc10dd6f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1c5487f6cb3584e60fd65fb2d4cf705b0290d5ca49d764b9d158bb39e131239
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.924920334Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d97a819e-10e2-4dd2-947d-9a08409f41b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.925015524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.929813359Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63d7545a10b4c0165b3febbc20d0097345f415c72b0dad28313a9ab1d08a5518 UID:61d32e53-cd45-4fce-a261-3b03793d8472 NetNS:/var/run/netns/54bd4b4c-a65a-4bfe-9eb6-b33fa99aa83e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000516df8}] Aliases:map[]}"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.929840118Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.945796433Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63d7545a10b4c0165b3febbc20d0097345f415c72b0dad28313a9ab1d08a5518 UID:61d32e53-cd45-4fce-a261-3b03793d8472 NetNS:/var/run/netns/54bd4b4c-a65a-4bfe-9eb6-b33fa99aa83e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000516df8}] Aliases:map[]}"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.945915095Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.946712259Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.947488689Z" level=info msg="Ran pod sandbox 63d7545a10b4c0165b3febbc20d0097345f415c72b0dad28313a9ab1d08a5518 with infra container: default/busybox/POD" id=d97a819e-10e2-4dd2-947d-9a08409f41b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.94870101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92706b58-817b-47a0-ab56-6785cbe102f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.948836213Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=92706b58-817b-47a0-ab56-6785cbe102f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.948950314Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=92706b58-817b-47a0-ab56-6785cbe102f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.949516381Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee774e68-4c28-4e02-a3d7-d77477cb8876 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:55:17 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:17.949828061Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.176594734Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ee774e68-4c28-4e02-a3d7-d77477cb8876 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.177502094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ad004bdc-e6b8-4823-948d-6b0438f9e398 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.179135631Z" level=info msg="Creating container: default/busybox/busybox" id=cfda6584-ae69-445d-b75b-a68e2b90be14 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.17931003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.18325756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.183736973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.213639067Z" level=info msg="Created container b5c1c4500d89a9ede4b9e57fac1ffdff7f944324737305ade4eecd8ba03d7eac: default/busybox/busybox" id=cfda6584-ae69-445d-b75b-a68e2b90be14 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.214278772Z" level=info msg="Starting container: b5c1c4500d89a9ede4b9e57fac1ffdff7f944324737305ade4eecd8ba03d7eac" id=d3f6a12d-a87e-418a-bc06-a50599063bdd name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:55:19 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:19.215980208Z" level=info msg="Started container" PID=2272 containerID=b5c1c4500d89a9ede4b9e57fac1ffdff7f944324737305ade4eecd8ba03d7eac description=default/busybox/busybox id=d3f6a12d-a87e-418a-bc06-a50599063bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=63d7545a10b4c0165b3febbc20d0097345f415c72b0dad28313a9ab1d08a5518
	Dec 28 06:55:25 old-k8s-version-694122 crio[773]: time="2025-12-28T06:55:25.707793187Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b5c1c4500d89a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   63d7545a10b4c       busybox                                          default
	e4313ea17a8b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   d1c5487f6cb35       coredns-5dd5756b68-f75js                         kube-system
	2f42369938cad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   f48b61daca4b9       storage-provisioner                              kube-system
	bdf6d3d466b21       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   ff7095940fe8a       kindnet-v7rhd                                    kube-system
	3355337c6e33d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   ae63df3c38013       kube-proxy-ckjcc                                 kube-system
	3e609ad98de24       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   00f3b8476b893       kube-apiserver-old-k8s-version-694122            kube-system
	ad52570fbbeb7       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   11e769ffd8af0       kube-controller-manager-old-k8s-version-694122   kube-system
	66601cd761792       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   51d8de1890be9       etcd-old-k8s-version-694122                      kube-system
	7a2d2ed18d9e5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   0f2026232073f       kube-scheduler-old-k8s-version-694122            kube-system
	
	
	==> describe nodes <==
	Name:               old-k8s-version-694122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-694122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_54_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-694122
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:55:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:55:18 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:55:18 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:55:18 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:55:18 +0000   Sun, 28 Dec 2025 06:55:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-694122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                65f3a296-84a7-49ed-b5c2-55741073e206
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-f75js                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-694122                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-v7rhd                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-694122             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-694122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-ckjcc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-694122             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x9 over 45s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-694122 event: Registered Node old-k8s-version-694122 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-694122 status is now: NodeReady
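	
	The "describe nodes" block above is ordinary kubectl output captured by the post-mortem helper. Assuming the kubeconfig context carries the profile name (as the harness commands elsewhere in this report do), it can be reproduced with:
	
	    kubectl --context old-k8s-version-694122 describe node old-k8s-version-694122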
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:55:27 up 37 min,  0 user,  load average: 2.99, 2.41, 1.56
	Linux old-k8s-version-694122 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:55:00 old-k8s-version-694122 kubelet[1410]: I1228 06:55:00.817251    1410 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.603697    1410 topology_manager.go:215] "Topology Admit Handler" podUID="1e2636ad-f5a4-4488-bb26-9f14615b487e" podNamespace="kube-system" podName="kube-proxy-ckjcc"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.613920    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2636ad-f5a4-4488-bb26-9f14615b487e-lib-modules\") pod \"kube-proxy-ckjcc\" (UID: \"1e2636ad-f5a4-4488-bb26-9f14615b487e\") " pod="kube-system/kube-proxy-ckjcc"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.614501    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e2636ad-f5a4-4488-bb26-9f14615b487e-kube-proxy\") pod \"kube-proxy-ckjcc\" (UID: \"1e2636ad-f5a4-4488-bb26-9f14615b487e\") " pod="kube-system/kube-proxy-ckjcc"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.614545    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm8b4\" (UniqueName: \"kubernetes.io/projected/1e2636ad-f5a4-4488-bb26-9f14615b487e-kube-api-access-mm8b4\") pod \"kube-proxy-ckjcc\" (UID: \"1e2636ad-f5a4-4488-bb26-9f14615b487e\") " pod="kube-system/kube-proxy-ckjcc"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.615232    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e2636ad-f5a4-4488-bb26-9f14615b487e-xtables-lock\") pod \"kube-proxy-ckjcc\" (UID: \"1e2636ad-f5a4-4488-bb26-9f14615b487e\") " pod="kube-system/kube-proxy-ckjcc"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.629415    1410 topology_manager.go:215] "Topology Admit Handler" podUID="0b1ff99b-7679-44db-834b-87265079d9b1" podNamespace="kube-system" podName="kindnet-v7rhd"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.720054    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0b1ff99b-7679-44db-834b-87265079d9b1-cni-cfg\") pod \"kindnet-v7rhd\" (UID: \"0b1ff99b-7679-44db-834b-87265079d9b1\") " pod="kube-system/kindnet-v7rhd"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.720154    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czg28\" (UniqueName: \"kubernetes.io/projected/0b1ff99b-7679-44db-834b-87265079d9b1-kube-api-access-czg28\") pod \"kindnet-v7rhd\" (UID: \"0b1ff99b-7679-44db-834b-87265079d9b1\") " pod="kube-system/kindnet-v7rhd"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.720227    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b1ff99b-7679-44db-834b-87265079d9b1-xtables-lock\") pod \"kindnet-v7rhd\" (UID: \"0b1ff99b-7679-44db-834b-87265079d9b1\") " pod="kube-system/kindnet-v7rhd"
	Dec 28 06:55:01 old-k8s-version-694122 kubelet[1410]: I1228 06:55:01.720276    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b1ff99b-7679-44db-834b-87265079d9b1-lib-modules\") pod \"kindnet-v7rhd\" (UID: \"0b1ff99b-7679-44db-834b-87265079d9b1\") " pod="kube-system/kindnet-v7rhd"
	Dec 28 06:55:02 old-k8s-version-694122 kubelet[1410]: I1228 06:55:02.695359    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ckjcc" podStartSLOduration=1.695311606 podCreationTimestamp="2025-12-28 06:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:02.694918102 +0000 UTC m=+15.151762884" watchObservedRunningTime="2025-12-28 06:55:02.695311606 +0000 UTC m=+15.152156388"
	Dec 28 06:55:04 old-k8s-version-694122 kubelet[1410]: I1228 06:55:04.762922    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-v7rhd" podStartSLOduration=2.054087896 podCreationTimestamp="2025-12-28 06:55:01 +0000 UTC" firstStartedPulling="2025-12-28 06:55:01.942146415 +0000 UTC m=+14.398991177" lastFinishedPulling="2025-12-28 06:55:03.650927418 +0000 UTC m=+16.107772191" observedRunningTime="2025-12-28 06:55:04.762579503 +0000 UTC m=+17.219424284" watchObservedRunningTime="2025-12-28 06:55:04.76286891 +0000 UTC m=+17.219713690"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.257233    1410 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.508017    1410 topology_manager.go:215] "Topology Admit Handler" podUID="1ac23a0e-e11c-4689-880a-1d7501b8178f" podNamespace="kube-system" podName="storage-provisioner"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.512088    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1ac23a0e-e11c-4689-880a-1d7501b8178f-tmp\") pod \"storage-provisioner\" (UID: \"1ac23a0e-e11c-4689-880a-1d7501b8178f\") " pod="kube-system/storage-provisioner"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.512147    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwsd\" (UniqueName: \"kubernetes.io/projected/1ac23a0e-e11c-4689-880a-1d7501b8178f-kube-api-access-bxwsd\") pod \"storage-provisioner\" (UID: \"1ac23a0e-e11c-4689-880a-1d7501b8178f\") " pod="kube-system/storage-provisioner"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.621592    1410 topology_manager.go:215] "Topology Admit Handler" podUID="90c72704-97b2-410e-b66d-bf7b621758a3" podNamespace="kube-system" podName="coredns-5dd5756b68-f75js"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.713946    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90c72704-97b2-410e-b66d-bf7b621758a3-config-volume\") pod \"coredns-5dd5756b68-f75js\" (UID: \"90c72704-97b2-410e-b66d-bf7b621758a3\") " pod="kube-system/coredns-5dd5756b68-f75js"
	Dec 28 06:55:14 old-k8s-version-694122 kubelet[1410]: I1228 06:55:14.714011    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44khp\" (UniqueName: \"kubernetes.io/projected/90c72704-97b2-410e-b66d-bf7b621758a3-kube-api-access-44khp\") pod \"coredns-5dd5756b68-f75js\" (UID: \"90c72704-97b2-410e-b66d-bf7b621758a3\") " pod="kube-system/coredns-5dd5756b68-f75js"
	Dec 28 06:55:15 old-k8s-version-694122 kubelet[1410]: I1228 06:55:15.719757    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-f75js" podStartSLOduration=14.719706725 podCreationTimestamp="2025-12-28 06:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:15.719527806 +0000 UTC m=+28.176372589" watchObservedRunningTime="2025-12-28 06:55:15.719706725 +0000 UTC m=+28.176551507"
	Dec 28 06:55:15 old-k8s-version-694122 kubelet[1410]: I1228 06:55:15.728953    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.728906938 podCreationTimestamp="2025-12-28 06:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:15.728858395 +0000 UTC m=+28.185703177" watchObservedRunningTime="2025-12-28 06:55:15.728906938 +0000 UTC m=+28.185751720"
	Dec 28 06:55:17 old-k8s-version-694122 kubelet[1410]: I1228 06:55:17.622761    1410 topology_manager.go:215] "Topology Admit Handler" podUID="61d32e53-cd45-4fce-a261-3b03793d8472" podNamespace="default" podName="busybox"
	Dec 28 06:55:17 old-k8s-version-694122 kubelet[1410]: I1228 06:55:17.632012    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcqxg\" (UniqueName: \"kubernetes.io/projected/61d32e53-cd45-4fce-a261-3b03793d8472-kube-api-access-lcqxg\") pod \"busybox\" (UID: \"61d32e53-cd45-4fce-a261-3b03793d8472\") " pod="default/busybox"
	Dec 28 06:55:19 old-k8s-version-694122 kubelet[1410]: I1228 06:55:19.729628    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5017883319999998 podCreationTimestamp="2025-12-28 06:55:17 +0000 UTC" firstStartedPulling="2025-12-28 06:55:17.949153959 +0000 UTC m=+30.405998719" lastFinishedPulling="2025-12-28 06:55:19.176939032 +0000 UTC m=+31.633783795" observedRunningTime="2025-12-28 06:55:19.729175874 +0000 UTC m=+32.186020656" watchObservedRunningTime="2025-12-28 06:55:19.729573408 +0000 UTC m=+32.186418189"
	

-- /stdout --
** stderr ** 
	E1228 06:55:26.787574  240326 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:26Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:26.851975  240326 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:26Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:26.916785  240326 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:26Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:26.986291  240326 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:26Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:27.059817  240326 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:27Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:27.129575  240326 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:27Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:27.199013  240326 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:27Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:27.266778  240326 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:27Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-694122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)
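
The recurring error above ("open /run/runc: no such file or directory") is minikube's container-listing path: it shells out to `sudo runc --root /run/runc list -f json`, runc cannot find that state directory on this CRI-O node, and every caller of the check (the log collection here, the paused-state check in the addon test below) fails with exit status 1. A hedged triage sketch, run inside the node via `minikube -p old-k8s-version-694122 ssh` (whether this CRI-O build relocates runc state via `runtime_root` under `[crio.runtime.runtimes.runc]` is an assumption to verify, not something this log confirms):

    sudo crio config | grep -n -A5 'crio.runtime.runtimes.runc'   # where CRI-O points runc's state root
    sudo ls /run/runc                                             # the root minikube queries; absent per the errors above
    sudo crictl ps                                                # the CRI view, which does list the running containers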

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (312.269917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-950460 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-950460 describe deploy/metrics-server -n kube-system: exit status 1 (96.465218ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-950460 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
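The expected string shows how the two flags compose: `--images=MetricsServer=registry.k8s.io/echoserver:1.4` substitutes the addon's image and `--registries=MetricsServer=fake.domain` prefixes its registry, so a successful run would leave the deployment referencing `fake.domain/registry.k8s.io/echoserver:1.4`. Here the deployment never existed because the enable command aborted on the paused-state check, hence the empty "Addon deployment info". A minimal sketch of the check that would otherwise apply (a jsonpath variant of the `kubectl describe` the test runs):

    kubectl --context no-preload-950460 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # on success this prints: fake.domain/registry.k8s.io/echoserver:1.4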
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-950460
helpers_test.go:244: (dbg) docker inspect no-preload-950460:

-- stdout --
	[
	    {
	        "Id": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	        "Created": "2025-12-28T06:55:00.893625015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:55:00.928085866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hostname",
	        "HostsPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hosts",
	        "LogPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137-json.log",
	        "Name": "/no-preload-950460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-950460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-950460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	                "LowerDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-950460",
	                "Source": "/var/lib/docker/volumes/no-preload-950460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-950460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-950460",
	                "name.minikube.sigs.k8s.io": "no-preload-950460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ce396773897d6d115a8e89c480d892e72f1602f9bb50a28d32a1acceb21e097f",
	            "SandboxKey": "/var/run/docker/netns/ce396773897d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-950460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "17f0dbca0a318f9427a146748bffe1e85820955f787d11210b299ebcf405441e",
	                    "EndpointID": "a05a5d3379b66f87c70efa809e7819ecf0236d26b6ec7dc05a901a4b01fd3ba4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5a:9e:81:dc:2c:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-950460",
	                        "7db017036a6f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
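The "NetworkSettings.Ports" map at the end of the inspect output is what the harness uses to reach the node from the host; a minimal sketch of extracting the published SSH port with standard docker Go-template syntax (container name taken from this report):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-950460
    # prints 33058 for the container shown above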
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950460 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-950460 logs -n 25: (1.541528645s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p kubernetes-upgrade-450365 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:54 UTC │
	│ delete  │ -p missing-upgrade-937201                                                                                                                                                                                                                     │ missing-upgrade-937201       │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:53 UTC │
	│ start   │ -p test-preload-785573 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:53 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ delete  │ -p kubernetes-upgrade-450365                                                                                                                                                                                                                  │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ image   │ test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p old-k8s-version-694122 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:55:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:55:51.753845  247213 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:55:51.754160  247213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:55:51.754174  247213 out.go:374] Setting ErrFile to fd 2...
	I1228 06:55:51.754180  247213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:55:51.754468  247213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:55:51.755111  247213 out.go:368] Setting JSON to false
	I1228 06:55:51.756387  247213 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2304,"bootTime":1766902648,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:55:51.756443  247213 start.go:143] virtualization: kvm guest
	I1228 06:55:51.758550  247213 out.go:179] * [default-k8s-diff-port-500581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:55:51.759811  247213 notify.go:221] Checking for updates...
	I1228 06:55:51.759829  247213 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:55:51.761196  247213 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:55:51.762672  247213 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:55:51.764436  247213 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:55:51.765587  247213 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:55:51.766716  247213 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:55:51.768207  247213 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:55:51.768297  247213 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:55:51.768376  247213 config.go:182] Loaded profile config "old-k8s-version-694122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1228 06:55:51.768458  247213 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:55:51.798481  247213 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:55:51.798583  247213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:55:51.865048  247213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:55:51.852983181 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:55:51.865220  247213 docker.go:319] overlay module found
	I1228 06:55:51.868578  247213 out.go:179] * Using the docker driver based on user configuration
	I1228 06:55:51.869788  247213 start.go:309] selected driver: docker
	I1228 06:55:51.869806  247213 start.go:928] validating driver "docker" against <nil>
	I1228 06:55:51.869819  247213 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:55:51.870450  247213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:55:51.930208  247213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-28 06:55:51.92029318 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:55:51.930364  247213 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:55:51.930565  247213 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:55:51.932191  247213 out.go:179] * Using Docker driver with root privileges
	I1228 06:55:51.933356  247213 cni.go:84] Creating CNI manager for ""
	I1228 06:55:51.933433  247213 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:55:51.933449  247213 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:55:51.933524  247213 start.go:353] cluster config:
	{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:55:51.934737  247213 out.go:179] * Starting "default-k8s-diff-port-500581" primary control-plane node in "default-k8s-diff-port-500581" cluster
	I1228 06:55:51.935691  247213 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:55:51.936782  247213 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:55:51.940156  247213 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:55:51.940187  247213 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:55:51.940200  247213 cache.go:65] Caching tarball of preloaded images
	I1228 06:55:51.940256  247213 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:55:51.940270  247213 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:55:51.940279  247213 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:55:51.940356  247213 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:55:51.940378  247213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json: {Name:mkd327c3cf080db6d05a38ff17192defa39a8dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:55:51.963359  247213 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:55:51.963384  247213 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:55:51.963398  247213 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:55:51.963435  247213 start.go:360] acquireMachinesLock for default-k8s-diff-port-500581: {Name:mk09ab6a942c8bf16d457c533e6be9200b317247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:55:51.963537  247213 start.go:364] duration metric: took 86.369µs to acquireMachinesLock for "default-k8s-diff-port-500581"
	I1228 06:55:51.963600  247213 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:55:51.963682  247213 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 28 06:55:41 no-preload-950460 crio[765]: time="2025-12-28T06:55:41.302840734Z" level=info msg="Starting container: 5ec75da949282722097d1798f2a1f5aeda60a198487e8db482be53455f4706f1" id=f08aed84-c9e5-4f93-a23a-0643e294f716 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:55:41 no-preload-950460 crio[765]: time="2025-12-28T06:55:41.305297373Z" level=info msg="Started container" PID=2835 containerID=5ec75da949282722097d1798f2a1f5aeda60a198487e8db482be53455f4706f1 description=kube-system/coredns-7d764666f9-npk6g/coredns id=f08aed84-c9e5-4f93-a23a-0643e294f716 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0fb0a7fb9d3fb5480526e2c39baf9f77a996550dbd947ee0fcf0c051d44cdd0
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.167718283Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7f65c5e9-50e2-48fd-9363-34bab76d4311 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.167814524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.175478035Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3b8aa77901f5162c3d1b5893addda14ad4cdbfd0f58dd3f701f61c3dff8983a UID:d94669e7-4dff-498c-96af-58fd76221f43 NetNS:/var/run/netns/7b366e1b-f5c2-46a8-ac14-d36549766f6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003383a8}] Aliases:map[]}"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.176294883Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.201481165Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3b8aa77901f5162c3d1b5893addda14ad4cdbfd0f58dd3f701f61c3dff8983a UID:d94669e7-4dff-498c-96af-58fd76221f43 NetNS:/var/run/netns/7b366e1b-f5c2-46a8-ac14-d36549766f6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003383a8}] Aliases:map[]}"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.201654765Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.202837897Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.203999708Z" level=info msg="Ran pod sandbox c3b8aa77901f5162c3d1b5893addda14ad4cdbfd0f58dd3f701f61c3dff8983a with infra container: default/busybox/POD" id=7f65c5e9-50e2-48fd-9363-34bab76d4311 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.205542334Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a1aced49-6632-401b-97e3-b7d604a54f9a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.205687156Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a1aced49-6632-401b-97e3-b7d604a54f9a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.205785936Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a1aced49-6632-401b-97e3-b7d604a54f9a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.206689697Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78ed313d-d729-4e18-9a04-a504e15fea9c name=/runtime.v1.ImageService/PullImage
	Dec 28 06:55:44 no-preload-950460 crio[765]: time="2025-12-28T06:55:44.207064577Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.572732755Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=78ed313d-d729-4e18-9a04-a504e15fea9c name=/runtime.v1.ImageService/PullImage
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.573439924Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17e3834b-cb2b-4dd3-819a-1ffe61b9f76f name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.575060623Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12abfd8b-a1a3-4fef-b44a-3690ea8431c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.579397076Z" level=info msg="Creating container: default/busybox/busybox" id=3ff62d95-fa8d-4cae-a68b-d255ef717a6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.579561484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.583367239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.583943243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.607713242Z" level=info msg="Created container d4b99efe7477702b4f557c26fd0570f587188c187094d3d96aa0e119f42956d6: default/busybox/busybox" id=3ff62d95-fa8d-4cae-a68b-d255ef717a6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.608503652Z" level=info msg="Starting container: d4b99efe7477702b4f557c26fd0570f587188c187094d3d96aa0e119f42956d6" id=34d11010-1b3d-4466-8e54-a2b5eaac7482 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:55:45 no-preload-950460 crio[765]: time="2025-12-28T06:55:45.61079483Z" level=info msg="Started container" PID=2915 containerID=d4b99efe7477702b4f557c26fd0570f587188c187094d3d96aa0e119f42956d6 description=default/busybox/busybox id=34d11010-1b3d-4466-8e54-a2b5eaac7482 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3b8aa77901f5162c3d1b5893addda14ad4cdbfd0f58dd3f701f61c3dff8983a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d4b99efe74777       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   c3b8aa77901f5       busybox                                     default
	5ec75da949282       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   e0fb0a7fb9d3f       coredns-7d764666f9-npk6g                    kube-system
	c80838afae4e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   dff3cfae9b482       storage-provisioner                         kube-system
	b292bfd7c1f69       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   989bd29db318b       kindnet-xhb7x                               kube-system
	53e5b46cc6980       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      26 seconds ago      Running             kube-proxy                0                   789d2dbb97030       kube-proxy-294rn                            kube-system
	79ad5f6b53935       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      36 seconds ago      Running             kube-apiserver            0                   23c4a2d149daf       kube-apiserver-no-preload-950460            kube-system
	38cb8f1756891       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      36 seconds ago      Running             etcd                      0                   558275c2b23ec       etcd-no-preload-950460                      kube-system
	6338bd75d6cf0       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      36 seconds ago      Running             kube-controller-manager   0                   ee8939b0fb8cb       kube-controller-manager-no-preload-950460   kube-system
	9612d8d766f54       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      36 seconds ago      Running             kube-scheduler            0                   623b559166dba       kube-scheduler-no-preload-950460            kube-system
	
	
	==> describe nodes <==
	Name:               no-preload-950460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-950460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-950460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_55_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:55:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-950460
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:55:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:55:53 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:55:53 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:55:53 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:55:53 +0000   Sun, 28 Dec 2025 06:55:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-950460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                89ca7428-7fe3-48bf-8e6c-c80da5b6d3a1
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-npk6g                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-950460                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-xhb7x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-950460             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-950460    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-294rn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-950460             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-950460 event: Registered Node no-preload-950460 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:55:55 up 38 min,  0 user,  load average: 3.90, 2.66, 1.67
	Linux no-preload-950460 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:55:28 no-preload-950460 kubelet[2220]: I1228 06:55:28.193384    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bab0d9b-3499-4546-bb8c-e47bfc17dbbf-xtables-lock\") pod \"kindnet-xhb7x\" (UID: \"4bab0d9b-3499-4546-bb8c-e47bfc17dbbf\") " pod="kube-system/kindnet-xhb7x"
	Dec 28 06:55:28 no-preload-950460 kubelet[2220]: I1228 06:55:28.193426    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4bab0d9b-3499-4546-bb8c-e47bfc17dbbf-cni-cfg\") pod \"kindnet-xhb7x\" (UID: \"4bab0d9b-3499-4546-bb8c-e47bfc17dbbf\") " pod="kube-system/kindnet-xhb7x"
	Dec 28 06:55:28 no-preload-950460 kubelet[2220]: I1228 06:55:28.193457    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2k7\" (UniqueName: \"kubernetes.io/projected/4bab0d9b-3499-4546-bb8c-e47bfc17dbbf-kube-api-access-rd2k7\") pod \"kindnet-xhb7x\" (UID: \"4bab0d9b-3499-4546-bb8c-e47bfc17dbbf\") " pod="kube-system/kindnet-xhb7x"
	Dec 28 06:55:28 no-preload-950460 kubelet[2220]: I1228 06:55:28.193559    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c88bb406-588c-45ec-9225-946af7327ec0-lib-modules\") pod \"kube-proxy-294rn\" (UID: \"c88bb406-588c-45ec-9225-946af7327ec0\") " pod="kube-system/kube-proxy-294rn"
	Dec 28 06:55:28 no-preload-950460 kubelet[2220]: E1228 06:55:28.591980    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-950460" containerName="kube-scheduler"
	Dec 28 06:55:29 no-preload-950460 kubelet[2220]: E1228 06:55:29.030947    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-950460" containerName="kube-apiserver"
	Dec 28 06:55:30 no-preload-950460 kubelet[2220]: I1228 06:55:30.506548    2220 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-294rn" podStartSLOduration=2.5065266 podStartE2EDuration="2.5065266s" podCreationTimestamp="2025-12-28 06:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:29.506222471 +0000 UTC m=+7.126218565" watchObservedRunningTime="2025-12-28 06:55:30.5065266 +0000 UTC m=+8.126522690"
	Dec 28 06:55:31 no-preload-950460 kubelet[2220]: E1228 06:55:31.679567    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:55:31 no-preload-950460 kubelet[2220]: I1228 06:55:31.692593    2220 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-xhb7x" podStartSLOduration=2.358018518 podStartE2EDuration="3.692575462s" podCreationTimestamp="2025-12-28 06:55:28 +0000 UTC" firstStartedPulling="2025-12-28 06:55:28.501168293 +0000 UTC m=+6.121164385" lastFinishedPulling="2025-12-28 06:55:29.835725248 +0000 UTC m=+7.455721329" observedRunningTime="2025-12-28 06:55:30.506919695 +0000 UTC m=+8.126915793" watchObservedRunningTime="2025-12-28 06:55:31.692575462 +0000 UTC m=+9.312571568"
	Dec 28 06:55:33 no-preload-950460 kubelet[2220]: E1228 06:55:33.241251    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-950460" containerName="kube-controller-manager"
	Dec 28 06:55:38 no-preload-950460 kubelet[2220]: E1228 06:55:38.596457    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-950460" containerName="kube-scheduler"
	Dec 28 06:55:39 no-preload-950460 kubelet[2220]: E1228 06:55:39.037715    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-950460" containerName="kube-apiserver"
	Dec 28 06:55:40 no-preload-950460 kubelet[2220]: I1228 06:55:40.875579    2220 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 28 06:55:40 no-preload-950460 kubelet[2220]: I1228 06:55:40.988060    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4076523-c034-4331-8dd7-a506e9dec2d9-tmp\") pod \"storage-provisioner\" (UID: \"a4076523-c034-4331-8dd7-a506e9dec2d9\") " pod="kube-system/storage-provisioner"
	Dec 28 06:55:40 no-preload-950460 kubelet[2220]: I1228 06:55:40.988186    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3cc436b-e460-483e-99aa-f7d44599d666-config-volume\") pod \"coredns-7d764666f9-npk6g\" (UID: \"a3cc436b-e460-483e-99aa-f7d44599d666\") " pod="kube-system/coredns-7d764666f9-npk6g"
	Dec 28 06:55:40 no-preload-950460 kubelet[2220]: I1228 06:55:40.988231    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc84d\" (UniqueName: \"kubernetes.io/projected/a3cc436b-e460-483e-99aa-f7d44599d666-kube-api-access-dc84d\") pod \"coredns-7d764666f9-npk6g\" (UID: \"a3cc436b-e460-483e-99aa-f7d44599d666\") " pod="kube-system/coredns-7d764666f9-npk6g"
	Dec 28 06:55:40 no-preload-950460 kubelet[2220]: I1228 06:55:40.988267    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cs8p\" (UniqueName: \"kubernetes.io/projected/a4076523-c034-4331-8dd7-a506e9dec2d9-kube-api-access-8cs8p\") pod \"storage-provisioner\" (UID: \"a4076523-c034-4331-8dd7-a506e9dec2d9\") " pod="kube-system/storage-provisioner"
	Dec 28 06:55:41 no-preload-950460 kubelet[2220]: E1228 06:55:41.519340    2220 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-npk6g" containerName="coredns"
	Dec 28 06:55:41 no-preload-950460 kubelet[2220]: I1228 06:55:41.535741    2220 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-npk6g" podStartSLOduration=13.53572528 podStartE2EDuration="13.53572528s" podCreationTimestamp="2025-12-28 06:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:41.535164746 +0000 UTC m=+19.155160849" watchObservedRunningTime="2025-12-28 06:55:41.53572528 +0000 UTC m=+19.155721384"
	Dec 28 06:55:41 no-preload-950460 kubelet[2220]: E1228 06:55:41.681260    2220 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:55:41 no-preload-950460 kubelet[2220]: I1228 06:55:41.691243    2220 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.69122238 podStartE2EDuration="13.69122238s" podCreationTimestamp="2025-12-28 06:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:55:41.571120344 +0000 UTC m=+19.191116445" watchObservedRunningTime="2025-12-28 06:55:41.69122238 +0000 UTC m=+19.311218480"
	Dec 28 06:55:42 no-preload-950460 kubelet[2220]: E1228 06:55:42.524507    2220 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-npk6g" containerName="coredns"
	Dec 28 06:55:43 no-preload-950460 kubelet[2220]: E1228 06:55:43.526587    2220 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-npk6g" containerName="coredns"
	Dec 28 06:55:43 no-preload-950460 kubelet[2220]: I1228 06:55:43.906675    2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjl7\" (UniqueName: \"kubernetes.io/projected/d94669e7-4dff-498c-96af-58fd76221f43-kube-api-access-kpjl7\") pod \"busybox\" (UID: \"d94669e7-4dff-498c-96af-58fd76221f43\") " pod="default/busybox"
	Dec 28 06:55:52 no-preload-950460 kubelet[2220]: E1228 06:55:52.980568    2220 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46574->127.0.0.1:42439: write tcp 127.0.0.1:46574->127.0.0.1:42439: write: broken pipe
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:55:54.316658  248213 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.389204  248213 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.481499  248213 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.563149  248213 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.626585  248213 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.705981  248213 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.782465  248213 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:55:54.862737  248213 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:55:54Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
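All eight stderr errors above come from the same probe: to resolve each component name to container IDs, `minikube logs` runs `sudo runc --root /run/runc list -f json` on the node, and on this CRI-O node nothing has created the /run/runc state directory, so every per-component listing fails identically. A minimal sketch of reproducing the probe by hand, assuming the node container is reachable via docker exec under its profile name and that crictl is present in the kic base image (as it normally is for CRI-O profiles):

	docker exec no-preload-950460 sudo runc --root /run/runc list -f json   # reproduces the failure: open /run/runc: no such file or directory
	docker exec no-preload-950460 sudo crictl ps --state Running            # lists the same containers via the CRI-O socket instead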
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-950460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.145844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-422591 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-422591 describe deploy/metrics-server -n kube-system: exit status 1 (59.00356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-422591 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
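The assertion at start_stop_delete_test.go:219 sees no deployment info here because the enable call already failed on the pause check, so the describe above returns NotFound and the expected image string is compared against empty output. When the addon does come up, the image the deployment actually carries can be read directly; a sketch with a jsonpath query, assuming the addon keeps its usual metrics-server name in kube-system:

	kubectl --context embed-certs-422591 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'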
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-422591
helpers_test.go:244: (dbg) docker inspect embed-certs-422591:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	        "Created": "2025-12-28T06:55:48.729729272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:55:48.765329082Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hostname",
	        "HostsPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hosts",
	        "LogPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af-json.log",
	        "Name": "/embed-certs-422591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-422591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	                "LowerDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422591",
	                "Source": "/var/lib/docker/volumes/embed-certs-422591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422591",
	                "name.minikube.sigs.k8s.io": "embed-certs-422591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "420443dca68ba1060e509e09b473b91d2e74d2375db19f5e9b707228a68d7289",
	            "SandboxKey": "/var/run/docker/netns/420443dca68b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-422591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4435fbd1d5af1aad2bc3ae8af8af55a14dd14ed989f116744286ee3cfc1b4c5c",
	                    "EndpointID": "b04b372b0e0503b65f1b1ea6798321c0f19109908e2937e70f3b8f127f53f52d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "16:ae:18:01:c4:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422591",
	                        "ceaa376452cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
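The NetworkSettings.Ports map in the inspect output above is what the harness reads to find the host-side port for each forwarded container port. The same lookup works as a one-liner with a Go template; a sketch against this container, which per the output above should print 33068 for the SSH port:

	docker inspect embed-certs-422591 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'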
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-422591 logs -n 25
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ delete  │ -p kubernetes-upgrade-450365                                                                                                                                                                                                                  │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ image   │ test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p old-k8s-version-694122 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:09.683208  252331 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:09.683522  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683533  252331 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:09.683539  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683817  252331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:09.684408  252331 out.go:368] Setting JSON to false
	I1228 06:56:09.686138  252331 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2322,"bootTime":1766902648,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:09.686216  252331 start.go:143] virtualization: kvm guest
	I1228 06:56:09.688379  252331 out.go:179] * [no-preload-950460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:09.689966  252331 notify.go:221] Checking for updates...
	I1228 06:56:09.690624  252331 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:09.691759  252331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:09.693287  252331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:09.694542  252331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:09.696489  252331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:09.698353  252331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:09.700204  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:09.700981  252331 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:09.731534  252331 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:09.731673  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.809872  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.797345649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.810012  252331 docker.go:319] overlay module found
	I1228 06:56:09.811872  252331 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:09.813113  252331 start.go:309] selected driver: docker
	I1228 06:56:09.813141  252331 start.go:928] validating driver "docker" against &{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.813261  252331 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:09.814183  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.889225  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.87743098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.889583  252331 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:09.889616  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:09.889688  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:09.889728  252331 start.go:353] cluster config:
	{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.892491  252331 out.go:179] * Starting "no-preload-950460" primary control-plane node in "no-preload-950460" cluster
	I1228 06:56:09.893559  252331 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:09.895822  252331 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:09.897246  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:09.897378  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:09.897665  252331 cache.go:107] acquiring lock: {Name:mkd9176dc8bfe34090aff279f6f101ea6f0af9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.897748  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 06:56:09.897763  252331 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.737µs
	I1228 06:56:09.897776  252331 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 06:56:09.897792  252331 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:09.897921  252331 cache.go:107] acquiring lock: {Name:mk7d35a6d2b389149dcbeab5c7c2ffb31f57d65c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898003  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 06:56:09.898018  252331 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 105.145µs
	I1228 06:56:09.898051  252331 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 06:56:09.898068  252331 cache.go:107] acquiring lock: {Name:mk242447cc3bf85a80c449b21152ddfbb942621c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898065  252331 cache.go:107] acquiring lock: {Name:mke2c1949855d4a55e5668b0d2ae93b37c482c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898080  252331 cache.go:107] acquiring lock: {Name:mk532de4689e044277857a73866e5969a2e4fbc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898114  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 06:56:09.898091  252331 cache.go:107] acquiring lock: {Name:mke47ac9c7c044600bef8f6b93ef0e26dc8302f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898122  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 06:56:09.898122  252331 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 56.777µs
	I1228 06:56:09.898131  252331 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 06:56:09.898131  252331 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 104.803µs
	I1228 06:56:09.898140  252331 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 06:56:09.898147  252331 cache.go:107] acquiring lock: {Name:mk9e59e568752d1ca479b7f88a0993095cc4ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898154  252331 cache.go:107] acquiring lock: {Name:mk4a1a601fb4bce5015f4152fc8c90f967d969a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898175  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 06:56:09.898185  252331 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 104.327µs
	I1228 06:56:09.898197  252331 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 06:56:09.898201  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 06:56:09.898209  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1228 06:56:09.898214  252331 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 145.471µs
	I1228 06:56:09.898217  252331 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.787µs
	I1228 06:56:09.898225  252331 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 06:56:09.898228  252331 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 06:56:09.898247  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 06:56:09.898255  252331 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 110.483µs
	I1228 06:56:09.898263  252331 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 06:56:09.898271  252331 cache.go:87] Successfully saved all images to host disk.
	I1228 06:56:09.925389  252331 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:09.925420  252331 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:09.925442  252331 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:09.925482  252331 start.go:360] acquireMachinesLock for no-preload-950460: {Name:mk62d7b73784bafca52412532a69147c30805a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.925562  252331 start.go:364] duration metric: took 47.499µs to acquireMachinesLock for "no-preload-950460"
	I1228 06:56:09.925594  252331 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:09.925604  252331 fix.go:54] fixHost starting: 
	I1228 06:56:09.925883  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:09.947427  252331 fix.go:112] recreateIfNeeded on no-preload-950460: state=Stopped err=<nil>
	W1228 06:56:09.947470  252331 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:09.244143  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.744639  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.244325  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.364365  243963 kubeadm.go:1114] duration metric: took 4.219411016s to wait for elevateKubeSystemPrivileges
	I1228 06:56:10.364473  243963 kubeadm.go:403] duration metric: took 12.104828541s to StartCluster
	I1228 06:56:10.364513  243963 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.364574  243963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:10.367334  243963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.367689  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:10.368151  243963 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:10.368391  243963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:10.368490  243963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:10.368509  243963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	I1228 06:56:10.368558  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.369000  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.369135  243963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:10.369221  243963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:10.369280  243963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:10.369857  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.370623  243963 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:10.374484  243963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:10.403086  243963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:56:07.752961  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:09.756311  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:10.405267  243963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.405293  243963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:10.405355  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.407121  243963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	I1228 06:56:10.407166  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.408137  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.438924  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.442747  243963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.442772  243963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:10.442827  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.477359  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.532358  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:10.573979  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.588218  243963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:10.648019  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.867869  243963 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:11.085832  243963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:11.095783  243963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.058672  247213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:56:09.063442  247213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:56:09.063466  247213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:56:09.077870  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:56:09.407176  247213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:56:09.407367  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.407468  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-500581 minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=default-k8s-diff-port-500581 minikube.k8s.io/primary=true
	I1228 06:56:09.580457  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.580543  247213 ops.go:34] apiserver oom_adj: -16
	I1228 06:56:10.080579  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.581243  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.080638  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.581312  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.080705  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.580620  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.081161  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.581441  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.652690  247213 kubeadm.go:1114] duration metric: took 4.245373726s to wait for elevateKubeSystemPrivileges
	I1228 06:56:13.652726  247213 kubeadm.go:403] duration metric: took 12.364737655s to StartCluster
	I1228 06:56:13.652748  247213 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.652812  247213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:13.654909  247213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.655206  247213 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:13.655359  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:13.655613  247213 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:13.655657  247213 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:13.655720  247213 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.655737  247213 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.655761  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.656261  247213 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.656283  247213 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:13.656613  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.657602  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.660155  247213 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:13.661579  247213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:13.684520  247213 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:11.097178  243963 addons.go:530] duration metric: took 728.781424ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:11.372202  243963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422591" context rescaled to 1 replicas
	W1228 06:56:13.088569  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:13.685585  247213 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.685607  247213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:13.685662  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.686151  247213 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.686203  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.686699  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.718321  247213 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.718423  247213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:13.718565  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.728024  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.751115  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.767540  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:13.826652  247213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:13.845102  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.860783  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.971728  247213 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:13.973616  247213 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:14.185139  247213 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.949330  252331 out.go:252] * Restarting existing docker container for "no-preload-950460" ...
	I1228 06:56:09.949409  252331 cli_runner.go:164] Run: docker start no-preload-950460
	I1228 06:56:10.304369  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:10.333247  252331 kic.go:430] container "no-preload-950460" state is running.
	I1228 06:56:10.333791  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:10.362343  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:10.362749  252331 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:10.362898  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:10.399401  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:10.400763  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:10.400782  252331 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:10.401698  252331 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42784->127.0.0.1:33078: read: connection reset by peer
	I1228 06:56:13.530578  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.530607  252331 ubuntu.go:182] provisioning hostname "no-preload-950460"
	I1228 06:56:13.530671  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.551523  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.551766  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.551782  252331 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-950460 && echo "no-preload-950460" | sudo tee /etc/hostname
	I1228 06:56:13.697078  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.697213  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.734170  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.734651  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.734718  252331 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:13.876570  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:13.876646  252331 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:13.878995  252331 ubuntu.go:190] setting up certificates
	I1228 06:56:13.879017  252331 provision.go:84] configureAuth start
	I1228 06:56:13.879096  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:13.902076  252331 provision.go:143] copyHostCerts
	I1228 06:56:13.902141  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:13.902162  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:13.902253  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:13.902388  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:13.902401  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:13.902438  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:13.902511  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:13.902520  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:13.902560  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:13.902624  252331 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.no-preload-950460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-950460]
	I1228 06:56:14.048352  252331 provision.go:177] copyRemoteCerts
	I1228 06:56:14.048419  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:14.048452  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.068611  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:14.168261  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:14.190018  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:56:14.208765  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:14.226610  252331 provision.go:87] duration metric: took 347.581995ms to configureAuth
	I1228 06:56:14.226635  252331 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:14.226812  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:14.226900  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.244598  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:14.244866  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:14.244892  252331 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:12.253209  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:14.796990  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:15.100866  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:15.100892  252331 machine.go:97] duration metric: took 4.738124144s to provisionDockerMachine
	I1228 06:56:15.100904  252331 start.go:293] postStartSetup for "no-preload-950460" (driver="docker")
	I1228 06:56:15.100918  252331 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:15.101012  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:15.101073  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.125860  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.230154  252331 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:15.234858  252331 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:15.234891  252331 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:15.234905  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:15.234956  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:15.235108  252331 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:15.235252  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:15.245155  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:15.268602  252331 start.go:296] duration metric: took 167.682246ms for postStartSetup
	I1228 06:56:15.268700  252331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:15.268759  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.288607  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.381324  252331 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:15.386166  252331 fix.go:56] duration metric: took 5.460557205s for fixHost
	I1228 06:56:15.386193  252331 start.go:83] releasing machines lock for "no-preload-950460", held for 5.460617152s
	I1228 06:56:15.386267  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:15.405738  252331 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:15.405806  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.405845  252331 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:15.405936  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.426086  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.426572  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.573340  252331 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:15.580022  252331 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:15.614860  252331 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:15.619799  252331 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:15.619859  252331 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:15.627841  252331 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:15.627863  252331 start.go:496] detecting cgroup driver to use...
	I1228 06:56:15.627897  252331 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:15.627935  252331 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:15.643627  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:15.656486  252331 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:15.656542  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:15.670796  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:15.683099  252331 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:15.763732  252331 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:15.846193  252331 docker.go:234] disabling docker service ...
	I1228 06:56:15.846248  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:15.860365  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:15.872316  252331 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:15.952498  252331 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:16.036768  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:16.048883  252331 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:16.062667  252331 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:16.062719  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.072039  252331 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:16.072100  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.080521  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.089148  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.097405  252331 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:16.105158  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.113413  252331 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.122659  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.131327  252331 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:16.138849  252331 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:16.145687  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.222679  252331 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:16.520445  252331 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:16.520595  252331 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:16.524711  252331 start.go:574] Will wait 60s for crictl version
	I1228 06:56:16.524766  252331 ssh_runner.go:195] Run: which crictl
	I1228 06:56:16.528189  252331 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:16.553043  252331 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:16.553151  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.580248  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.608534  252331 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:14.186403  247213 addons.go:530] duration metric: took 530.739381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:14.479845  247213 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-500581" context rescaled to 1 replicas
	W1228 06:56:15.976454  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:16.609592  252331 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:16.626775  252331 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:16.630900  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
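Note: the /etc/hosts command above is an idempotent update: grep -v drops any stale entry for the name, the new line is appended, and the result is copied back over /etc/hosts via a temp file. Generalized sketch (HOST_IP and HOST_NAME are placeholders for this example):

    HOST_IP=192.168.94.1; HOST_NAME=host.minikube.internal
    { grep -v $'\t'"$HOST_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOST_IP" "$HOST_NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$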
	I1228 06:56:16.641409  252331 kubeadm.go:884] updating cluster {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:16.641518  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:16.641556  252331 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:16.675102  252331 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:16.675123  252331 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:16.675129  252331 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:16.675244  252331 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-950460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
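Note: the unit text above is written to the kubelet drop-in (the 10-kubeadm.conf scp a few lines below); the empty ExecStart= line clears the packaged command before the override. To inspect the merged unit on the node:

    systemctl cat kubelet                  # base unit plus every drop-in, including the override above
    systemctl show kubelet -p ExecStart    # confirms which ExecStart is in effect after daemon-reload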
	I1228 06:56:16.675331  252331 ssh_runner.go:195] Run: crio config
	I1228 06:56:16.718702  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:16.718733  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:16.718752  252331 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:16.718789  252331 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950460 NodeName:no-preload-950460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:16.718988  252331 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
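Note: on a fresh node this manifest would be handed to kubeadm; in this run the restart path below finds existing configuration files and skips init. A hedged sketch of how such a manifest is consumed (the path matches the kubeadm.yaml.new scp target below; kubeadm config validate requires a recent kubeadm):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # syntax/schema check only
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml              # what a cold bootstrap would run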
	I1228 06:56:16.719070  252331 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:16.727836  252331 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:16.727925  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:16.735688  252331 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:16.748533  252331 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:16.761180  252331 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1228 06:56:16.774346  252331 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:16.777963  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:16.787778  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.870258  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:16.897229  252331 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460 for IP: 192.168.94.2
	I1228 06:56:16.897252  252331 certs.go:195] generating shared ca certs ...
	I1228 06:56:16.897273  252331 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:16.897417  252331 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:16.897469  252331 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:16.897483  252331 certs.go:257] generating profile certs ...
	I1228 06:56:16.897565  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key
	I1228 06:56:16.897621  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947
	I1228 06:56:16.897659  252331 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key
	I1228 06:56:16.897752  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:16.897786  252331 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:16.897800  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:16.897832  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:16.897861  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:16.897894  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:16.897943  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:16.898713  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:16.917010  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:16.936367  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:16.957237  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:16.980495  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:16.998372  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 06:56:17.015059  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:17.031891  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:17.049280  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:17.065663  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:17.082832  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:17.100902  252331 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:17.113166  252331 ssh_runner.go:195] Run: openssl version
	I1228 06:56:17.119103  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.126689  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:17.134233  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.137970  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.138010  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.174376  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:17.182094  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.189546  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:17.196673  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200312  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200355  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.235404  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:17.243056  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.251423  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:17.259118  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262689  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262740  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.298353  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
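Note: the openssl x509 -hash calls above compute the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL resolves under /etc/ssl/certs at verify time; the ln -fs lines install each CA under that name. Sketch of the same installation for one cert from this run:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # name OpenSSL looks up during chain building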
	I1228 06:56:17.306420  252331 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:17.310366  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:17.344608  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:17.380698  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:17.426014  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:17.474223  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:17.531854  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
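Note: -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24h); a zero exit therefore means the cert is safe for at least another day, which is how this run decides no regeneration is needed. For example:

    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h; regenerate"
    fi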
	I1228 06:56:17.577281  252331 kubeadm.go:401] StartCluster: {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:17.577434  252331 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:17.636151  252331 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:17.648977  252331 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:17.649067  252331 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:17.657728  252331 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:17.657748  252331 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:17.657796  252331 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:17.666778  252331 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:17.668081  252331 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-950460" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.668996  252331 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-950460" cluster setting kubeconfig missing "no-preload-950460" context setting]
	I1228 06:56:17.670453  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.672683  252331 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:17.683544  252331 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1228 06:56:17.683585  252331 kubeadm.go:602] duration metric: took 25.829752ms to restartPrimaryControlPlane
	I1228 06:56:17.683596  252331 kubeadm.go:403] duration metric: took 106.327386ms to StartCluster
	I1228 06:56:17.683615  252331 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.683665  252331 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.686260  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.686556  252331 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:17.686676  252331 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:17.686779  252331 addons.go:70] Setting storage-provisioner=true in profile "no-preload-950460"
	I1228 06:56:17.686790  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:17.686794  252331 addons.go:239] Setting addon storage-provisioner=true in "no-preload-950460"
	W1228 06:56:17.686802  252331 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:17.686829  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686834  252331 addons.go:70] Setting default-storageclass=true in profile "no-preload-950460"
	I1228 06:56:17.686838  252331 addons.go:70] Setting dashboard=true in profile "no-preload-950460"
	I1228 06:56:17.686865  252331 addons.go:239] Setting addon dashboard=true in "no-preload-950460"
	W1228 06:56:17.686879  252331 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:17.686912  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686847  252331 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950460"
	I1228 06:56:17.687329  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687415  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687330  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.689184  252331 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:17.690310  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:17.712805  252331 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:17.713229  252331 addons.go:239] Setting addon default-storageclass=true in "no-preload-950460"
	W1228 06:56:17.713248  252331 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:17.713270  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.713562  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.713731  252331 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:17.713774  252331 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.713791  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:17.713835  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.715782  252331 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1228 06:56:15.089728  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:17.589238  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:17.716776  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:17.716793  252331 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:17.716846  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.737306  252331 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.737329  252331 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:17.737387  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.747296  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.752550  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.763145  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.827637  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:17.841176  252331 node_ready.go:35] waiting up to 6m0s for node "no-preload-950460" to be "Ready" ...
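Note: node_ready.go polls the node object until its Ready condition is True; the same wait expressed directly with kubectl:

    kubectl wait --for=condition=Ready node/no-preload-950460 --timeout=360s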
	I1228 06:56:17.852679  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.859387  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:17.859413  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:17.870358  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.876579  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:17.876626  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:17.892110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:17.892137  252331 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:17.907110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:17.907153  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:17.921175  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:17.921199  252331 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:17.934592  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:17.934610  252331 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:17.946620  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:17.946645  252331 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:17.958616  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:17.958637  252331 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:17.971511  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:17.971531  252331 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:17.984466  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:19.111197  252331 node_ready.go:49] node "no-preload-950460" is "Ready"
	I1228 06:56:19.111234  252331 node_ready.go:38] duration metric: took 1.270013468s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:19.111250  252331 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:19.111303  252331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:19.644061  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.791326834s)
	I1228 06:56:19.644127  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.773734972s)
	I1228 06:56:19.644217  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.659719643s)
	I1228 06:56:19.644238  252331 api_server.go:72] duration metric: took 1.957648252s to wait for apiserver process to appear ...
	I1228 06:56:19.644247  252331 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:19.644265  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:19.646079  252331 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-950460 addons enable metrics-server
	
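Note: after the combined kubectl apply above, a quick readiness check could look like the following (the namespace and deployment name are assumptions based on the standard dashboard addon; they do not appear in this log):

    kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s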
	I1228 06:56:19.648689  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:19.648710  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
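Note: the 500 above is transient: /healthz aggregates the per-check lines shown, and the rbac/bootstrap-roles and priority-class poststarthooks simply have not finished on this first probe; the retry at 06:56:20 below gets 200. The same per-check breakdown can be requested by hand:

    kubectl get --raw '/healthz?verbose'   # prints the [+]/[-] lines seen above
    kubectl get --raw '/readyz?verbose'    # readiness variant with the same format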
	I1228 06:56:19.652919  252331 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:56:19.654055  252331 addons.go:530] duration metric: took 1.967385599s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1228 06:56:17.252978  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:19.752632  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:17.976710  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.476521  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.089066  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:22.089199  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:23.089137  243963 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:23.089171  243963 node_ready.go:38] duration metric: took 12.00330569s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:23.089188  243963 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:23.089247  243963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:23.109640  243963 api_server.go:72] duration metric: took 12.740459175s to wait for apiserver process to appear ...
	I1228 06:56:23.109670  243963 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:23.109691  243963 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:23.115347  243963 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:56:23.116388  243963 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:23.116413  243963 api_server.go:131] duration metric: took 6.736322ms to wait for apiserver health ...
	I1228 06:56:23.116422  243963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:23.120151  243963 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:23.120183  243963 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.120191  243963 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.120197  243963 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.120217  243963 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.120229  243963 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.120236  243963 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.120242  243963 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.120247  243963 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.120255  243963 system_pods.go:74] duration metric: took 3.827732ms to wait for pod list to return data ...
	I1228 06:56:23.120267  243963 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:23.122455  243963 default_sa.go:45] found service account: "default"
	I1228 06:56:23.122484  243963 default_sa.go:55] duration metric: took 2.209324ms for default service account to be created ...
	I1228 06:56:23.122495  243963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:23.125732  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.125761  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.125768  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.125774  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.125782  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.125798  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.125806  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.125812  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.125821  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.125858  243963 retry.go:84] will retry after 300ms: missing components: kube-dns
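Note: the retry above polls the pod list until kube-dns (coredns) leaves Pending; an equivalent one-shot wait from outside the test harness, using the label kubeadm puts on coredns pods:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s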
	I1228 06:56:23.380969  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.381005  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.381014  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.381023  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.381042  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.381051  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.381057  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.381067  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.381075  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:23.736873  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.736924  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.736933  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.736942  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.736955  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:23.736965  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.736971  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.736990  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.737002  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:24.078656  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.078690  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:24.078696  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.078700  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.078704  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.078709  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.078712  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.078715  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.078721  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.144322  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.148700  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:20.148728  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:20.644327  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.648377  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 06:56:20.649429  252331 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:20.649449  252331 api_server.go:131] duration metric: took 1.005195846s to wait for apiserver health ...
	I1228 06:56:20.649458  252331 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:20.652593  252331 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:20.652630  252331 system_pods.go:61] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.652637  252331 system_pods.go:61] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.652644  252331 system_pods.go:61] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.652653  252331 system_pods.go:61] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.652667  252331 system_pods.go:61] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.652675  252331 system_pods.go:61] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.652686  252331 system_pods.go:61] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.652694  252331 system_pods.go:61] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.652703  252331 system_pods.go:74] duration metric: took 3.239436ms to wait for pod list to return data ...
	I1228 06:56:20.652715  252331 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:20.654840  252331 default_sa.go:45] found service account: "default"
	I1228 06:56:20.654856  252331 default_sa.go:55] duration metric: took 2.135398ms for default service account to be created ...
	I1228 06:56:20.654863  252331 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:20.656911  252331 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:20.656935  252331 system_pods.go:89] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.656943  252331 system_pods.go:89] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.656950  252331 system_pods.go:89] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.656955  252331 system_pods.go:89] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.656961  252331 system_pods.go:89] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.656969  252331 system_pods.go:89] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.656974  252331 system_pods.go:89] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.656979  252331 system_pods.go:89] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.656988  252331 system_pods.go:126] duration metric: took 2.120486ms to wait for k8s-apps to be running ...
	I1228 06:56:20.656995  252331 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:20.657051  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:20.671024  252331 system_svc.go:56] duration metric: took 14.023192ms WaitForService to wait for kubelet
	I1228 06:56:20.671072  252331 kubeadm.go:587] duration metric: took 2.984480725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:20.671093  252331 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:20.673706  252331 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:20.673727  252331 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:20.673740  252331 node_conditions.go:105] duration metric: took 2.643602ms to run NodePressure ...
	I1228 06:56:20.673752  252331 start.go:242] waiting for startup goroutines ...
	I1228 06:56:20.673758  252331 start.go:247] waiting for cluster config update ...
	I1228 06:56:20.673773  252331 start.go:256] writing updated cluster config ...
	I1228 06:56:20.674067  252331 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:20.677778  252331 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:20.681121  252331 pod_ready.go:83] waiting for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:56:22.686104  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:22.251764  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:24.253072  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:24.497471  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.497502  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running
	I1228 06:56:24.497510  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.497516  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.497521  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.497528  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.497533  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.497539  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.497545  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running
	I1228 06:56:24.497556  243963 system_pods.go:126] duration metric: took 1.375053604s to wait for k8s-apps to be running ...
	I1228 06:56:24.497578  243963 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:24.497628  243963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:24.514567  243963 system_svc.go:56] duration metric: took 16.979492ms WaitForService to wait for kubelet
	I1228 06:56:24.514605  243963 kubeadm.go:587] duration metric: took 14.145429952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:24.514629  243963 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:24.518108  243963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:24.518140  243963 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:24.518158  243963 node_conditions.go:105] duration metric: took 3.522325ms to run NodePressure ...
	I1228 06:56:24.518177  243963 start.go:242] waiting for startup goroutines ...
	I1228 06:56:24.518186  243963 start.go:247] waiting for cluster config update ...
	I1228 06:56:24.518200  243963 start.go:256] writing updated cluster config ...
	I1228 06:56:24.518505  243963 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:24.523480  243963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:24.528339  243963 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.533314  243963 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:56:24.533340  243963 pod_ready.go:86] duration metric: took 4.973959ms for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.535652  243963 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.540088  243963 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:56:24.540118  243963 pod_ready.go:86] duration metric: took 4.440493ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.542361  243963 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.546378  243963 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:56:24.546401  243963 pod_ready.go:86] duration metric: took 4.016397ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.548746  243963 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.928795  243963 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:56:24.928827  243963 pod_ready.go:86] duration metric: took 380.060187ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.129424  243963 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.528796  243963 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:56:25.528829  243963 pod_ready.go:86] duration metric: took 399.379664ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.728149  243963 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129240  243963 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:56:26.129352  243963 pod_ready.go:86] duration metric: took 401.16633ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129383  243963 pod_ready.go:40] duration metric: took 1.605872095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:26.195003  243963 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:26.196497  243963 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
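
The pod_ready steps above implement a simple poll: fetch the pod, check its Ready condition, and retry until the "extra waiting up to 4m0s" budget runs out. A minimal client-go sketch of that check, assuming a standard kubeconfig; the pod name is taken from the log and the 2s interval is an approximation of the retry cadence visible in the timestamps:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7d764666f9-dmhdv", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
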
	W1228 06:56:22.478649  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:24.977721  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:26.478547  247213 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:26.478581  247213 node_ready.go:38] duration metric: took 12.504894114s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:26.478597  247213 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:26.478645  247213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:26.500009  247213 api_server.go:72] duration metric: took 12.844753456s to wait for apiserver process to appear ...
	I1228 06:56:26.500069  247213 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:26.500092  247213 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:56:26.505791  247213 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:56:26.506819  247213 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:26.506850  247213 api_server.go:131] duration metric: took 6.772745ms to wait for apiserver health ...
	I1228 06:56:26.506860  247213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:26.511152  247213 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:26.511188  247213 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.511196  247213 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.511210  247213 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.511217  247213 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.511223  247213 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.511228  247213 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.511237  247213 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.511245  247213 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.511257  247213 system_pods.go:74] duration metric: took 4.390309ms to wait for pod list to return data ...
	I1228 06:56:26.511272  247213 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:26.516259  247213 default_sa.go:45] found service account: "default"
	I1228 06:56:26.516290  247213 default_sa.go:55] duration metric: took 5.010014ms for default service account to be created ...
	I1228 06:56:26.516302  247213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:26.522640  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.522682  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.522692  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.522701  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.522706  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.522712  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.522718  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.522725  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.522732  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.522761  247213 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1228 06:56:26.727648  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.727695  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.727705  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.727714  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.727719  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.727726  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.727733  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.727739  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.727753  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.048953  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.048983  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.048988  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.048995  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.048999  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.049002  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.049006  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.049012  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.049019  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.347697  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.347744  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.347753  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.347761  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.347767  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.347773  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.347779  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.347784  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.347792  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.894612  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.894645  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running
	I1228 06:56:27.894654  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.894661  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.894668  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.894674  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.894747  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.894780  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.894786  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running
	I1228 06:56:27.894796  247213 system_pods.go:126] duration metric: took 1.378485807s to wait for k8s-apps to be running ...
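
The "will retry after 200ms: missing components: kube-dns" line earlier in this run comes from a generic retry helper around the pod listing. A sketch of that pattern, assuming (as the growing gaps between attempts suggest) an increasing backoff; only the first 200ms interval is stated explicitly in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs fn until it succeeds or the deadline passes,
// sleeping between attempts and growing the interval each time.
func retryWithBackoff(deadline time.Time, first time.Duration, fn func() error) error {
	interval := first
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		fmt.Printf("will retry after %v: %v\n", interval, err)
		time.Sleep(interval)
		interval *= 2 // assumed growth; the log only shows the first interval
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(time.Now().Add(5*time.Second), 200*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err, "after", attempts, "attempts")
}
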
	I1228 06:56:27.894807  247213 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:27.894877  247213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:27.913725  247213 system_svc.go:56] duration metric: took 18.908162ms WaitForService to wait for kubelet
	I1228 06:56:27.913765  247213 kubeadm.go:587] duration metric: took 14.258529006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:27.913788  247213 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:27.917024  247213 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:27.917082  247213 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:27.917101  247213 node_conditions.go:105] duration metric: took 3.307449ms to run NodePressure ...
	I1228 06:56:27.917117  247213 start.go:242] waiting for startup goroutines ...
	I1228 06:56:27.917128  247213 start.go:247] waiting for cluster config update ...
	I1228 06:56:27.917147  247213 start.go:256] writing updated cluster config ...
	I1228 06:56:27.917432  247213 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:27.922292  247213 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:27.928675  247213 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.933976  247213 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:56:27.934000  247213 pod_ready.go:86] duration metric: took 5.293782ms for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.952822  247213 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.957941  247213 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.957969  247213 pod_ready.go:86] duration metric: took 5.117578ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.960256  247213 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.964517  247213 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.964541  247213 pod_ready.go:86] duration metric: took 4.26155ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.966612  247213 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.326675  247213 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:28.326711  247213 pod_ready.go:86] duration metric: took 360.070556ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.527492  247213 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.926562  247213 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:56:28.926586  247213 pod_ready.go:86] duration metric: took 398.654778ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.128257  247213 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527347  247213 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:29.527373  247213 pod_ready.go:86] duration metric: took 399.091542ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527384  247213 pod_ready.go:40] duration metric: took 1.605062412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.572470  247213 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:29.574045  247213 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
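
The healthz step in the run above issues a GET against https://192.168.103.2:8444/healthz and proceeds once it returns "200: ok". A minimal sketch of such a probe; InsecureSkipVerify is a simplification for illustration (minikube authenticates with the cluster's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification, see note above
		},
	}
	resp, err := client.Get("https://192.168.103.2:8444/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log saw "200: ok"
}
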
	W1228 06:56:24.687607  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:27.187235  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:26.754423  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:28.252283  242715 pod_ready.go:94] pod "coredns-5dd5756b68-f75js" is "Ready"
	I1228 06:56:28.252312  242715 pod_ready.go:86] duration metric: took 34.005583819s for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.255219  242715 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.259146  242715 pod_ready.go:94] pod "etcd-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.259168  242715 pod_ready.go:86] duration metric: took 3.930339ms for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.261639  242715 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.265232  242715 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.265251  242715 pod_ready.go:86] duration metric: took 3.589847ms for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.267802  242715 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.450233  242715 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.450266  242715 pod_ready.go:86] duration metric: took 182.442698ms for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.651005  242715 pod_ready.go:83] waiting for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.050020  242715 pod_ready.go:94] pod "kube-proxy-ckjcc" is "Ready"
	I1228 06:56:29.050071  242715 pod_ready.go:86] duration metric: took 399.008645ms for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.250805  242715 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650219  242715 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-694122" is "Ready"
	I1228 06:56:29.650260  242715 pod_ready.go:86] duration metric: took 399.415539ms for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650277  242715 pod_ready.go:40] duration metric: took 35.408765036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.699567  242715 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 06:56:29.701172  242715 out.go:203] 
	W1228 06:56:29.702316  242715 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 06:56:29.703412  242715 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:56:29.704563  242715 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-694122" cluster and "default" namespace by default
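
The "(minor skew: 7)" figure above is simply the difference between the kubectl and cluster minor versions (1.35.0 vs 1.28.0); kubectl officially supports a skew of only one minor version in either direction, hence the warning:

package main

import "fmt"

func main() {
	kubectlMinor, clusterMinor := 35, 28
	fmt.Println("minor skew:", kubectlMinor-clusterMinor) // 7, well beyond the supported ±1
}
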
	W1228 06:56:29.687654  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:32.186292  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:23 embed-certs-422591 crio[774]: time="2025-12-28T06:56:23.484764748Z" level=info msg="Starting container: 48a54b0c1962c73a84b17c0371c52fe899e5f85dece6687469fbbe88db19f6fe" id=66ee608c-afc7-49ca-9843-07bd05e9047d name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:23 embed-certs-422591 crio[774]: time="2025-12-28T06:56:23.487446806Z" level=info msg="Started container" PID=1890 containerID=48a54b0c1962c73a84b17c0371c52fe899e5f85dece6687469fbbe88db19f6fe description=kube-system/coredns-7d764666f9-dmhdv/coredns id=66ee608c-afc7-49ca-9843-07bd05e9047d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3223b9839595dcd11f11bb883b1e1fe448b3594b6a42a560e9c4fed0853bf47
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.690882709Z" level=info msg="Running pod sandbox: default/busybox/POD" id=53b70748-f46e-4a3a-a3c1-77d3589f6806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.690978702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.696925142Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5409cff3726d073de11fe06b2171032cd70762a88e17267a6cd7effc232bd380 UID:b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3 NetNS:/var/run/netns/28d8fbf5-0c41-49d6-b270-d2e8c71ab02b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00092a9d0}] Aliases:map[]}"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.696956755Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.717386078Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5409cff3726d073de11fe06b2171032cd70762a88e17267a6cd7effc232bd380 UID:b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3 NetNS:/var/run/netns/28d8fbf5-0c41-49d6-b270-d2e8c71ab02b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00092a9d0}] Aliases:map[]}"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.717562683Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.718747431Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.719915115Z" level=info msg="Ran pod sandbox 5409cff3726d073de11fe06b2171032cd70762a88e17267a6cd7effc232bd380 with infra container: default/busybox/POD" id=53b70748-f46e-4a3a-a3c1-77d3589f6806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.721402912Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5cff0aec-2762-49a5-a6b5-6b965281e5ba name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.721544105Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5cff0aec-2762-49a5-a6b5-6b965281e5ba name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.721665343Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5cff0aec-2762-49a5-a6b5-6b965281e5ba name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.722540041Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1295cf14-2966-48c8-b4e8-d115ce440885 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:56:26 embed-certs-422591 crio[774]: time="2025-12-28T06:56:26.722880472Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.513197903Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1295cf14-2966-48c8-b4e8-d115ce440885 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.513882868Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30bdd5a9-01a3-4a86-b40e-a606e39e2f6b name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.515824109Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f75723d1-6c43-41c1-92fd-87b5a596a53b name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.519474218Z" level=info msg="Creating container: default/busybox/busybox" id=f35e590d-5cc4-43aa-b7e8-8596721df2a3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.519627205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.524396583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.524918594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.559487437Z" level=info msg="Created container d56cfcefbc60694a4e577ca764a1bd632525492ec2fa2b2f833b9189b39c61b9: default/busybox/busybox" id=f35e590d-5cc4-43aa-b7e8-8596721df2a3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.560454278Z" level=info msg="Starting container: d56cfcefbc60694a4e577ca764a1bd632525492ec2fa2b2f833b9189b39c61b9" id=30715608-c929-4796-acee-9a4ef7c3b96d name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:28 embed-certs-422591 crio[774]: time="2025-12-28T06:56:28.562531758Z" level=info msg="Started container" PID=1970 containerID=d56cfcefbc60694a4e577ca764a1bd632525492ec2fa2b2f833b9189b39c61b9 description=default/busybox/busybox id=30715608-c929-4796-acee-9a4ef7c3b96d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5409cff3726d073de11fe06b2171032cd70762a88e17267a6cd7effc232bd380
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d56cfcefbc606       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   5409cff3726d0       busybox                                      default
	48a54b0c1962c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   d3223b9839595       coredns-7d764666f9-dmhdv                     kube-system
	6ba5b74acb9f9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   f37c2ee9cc3d5       storage-provisioner                          kube-system
	3ccba7f653216       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   e5a1f47edd7d2       kindnet-9zxtp                                kube-system
	99567aace2d53       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   1bcfa42a7125b       kube-proxy-j2dkd                             kube-system
	8e243011e3c10       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   3a22b2bc5a984       kube-controller-manager-embed-certs-422591   kube-system
	6397ae284e85e       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   21d1a3360824e       kube-scheduler-embed-certs-422591            kube-system
	5c62afe5065f3       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   95bf6cef2ff05       etcd-embed-certs-422591                      kube-system
	583cc622bcc02       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   036e4ce22efc0       kube-apiserver-embed-certs-422591            kube-system
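
For comparison with the failing runc probe elsewhere in this report, the table above appears to be gathered from the CRI runtime itself (the columns match crictl ps -a output), and that path works on this image. A sketch of querying CRI-O the same way; the sudo and PATH assumptions mirror the test environment:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// crictl talks to the CRI socket directly, so it does not depend on
	// runc's state directory layout.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(string(out))
}
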
	
	
	==> describe nodes <==
	Name:               embed-certs-422591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-422591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-422591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422591
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:56:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:35 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:35 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:35 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:35 +0000   Sun, 28 Dec 2025 06:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422591
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                8e5f32a2-4590-4e27-9bc4-b0131e49535f
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-dmhdv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-422591                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-9zxtp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-422591             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-422591    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-j2dkd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-422591             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node embed-certs-422591 event: Registered Node embed-certs-422591 in Controller
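
The "Allocated resources" percentages above are requests (or limits) summed from the pod table, divided by the node's allocatable capacity, with the result truncated; a quick check of the arithmetic:

package main

import "fmt"

func main() {
	cpuReqMilli := 850.0      // 850m, summed from the pod table above
	cpuAlloc := 8 * 1000.0    // 8 allocatable cores, in millicores
	memReqKi := 220.0 * 1024  // 220Mi of requests
	memAllocKi := 32863348.0  // allocatable memory in Ki
	fmt.Printf("cpu    %.1f%%\n", 100*cpuReqMilli/cpuAlloc) // 10.6%, shown as 10%
	fmt.Printf("memory %.1f%%\n", 100*memReqKi/memAllocKi)  // 0.7%, shown as 0%
}
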
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:56:36 up 39 min,  0 user,  load average: 3.35, 2.70, 1.73
	Linux embed-certs-422591 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:10 embed-certs-422591 kubelet[1305]: E1228 06:56:10.455372    1305 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 28 06:56:10 embed-certs-422591 kubelet[1305]: E1228 06:56:10.455394    1305 projected.go:196] Error preparing data for projected volume kube-api-access-cltkp for pod kube-system/kube-proxy-j2dkd: configmap "kube-root-ca.crt" not found
	Dec 28 06:56:10 embed-certs-422591 kubelet[1305]: E1228 06:56:10.455456    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8-kube-api-access-cltkp podName:f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8 nodeName:}" failed. No retries permitted until 2025-12-28 06:56:10.955435775 +0000 UTC m=+5.954257263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cltkp" (UniqueName: "kubernetes.io/projected/f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8-kube-api-access-cltkp") pod "kube-proxy-j2dkd" (UID: "f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8") : configmap "kube-root-ca.crt" not found
	Dec 28 06:56:10 embed-certs-422591 kubelet[1305]: E1228 06:56:10.531507    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-422591" containerName="etcd"
	Dec 28 06:56:12 embed-certs-422591 kubelet[1305]: E1228 06:56:12.106014    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-422591" containerName="kube-scheduler"
	Dec 28 06:56:12 embed-certs-422591 kubelet[1305]: I1228 06:56:12.151734    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-j2dkd" podStartSLOduration=2.151694312 podStartE2EDuration="2.151694312s" podCreationTimestamp="2025-12-28 06:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:12.151650072 +0000 UTC m=+7.150471556" watchObservedRunningTime="2025-12-28 06:56:12.151694312 +0000 UTC m=+7.150515806"
	Dec 28 06:56:13 embed-certs-422591 kubelet[1305]: E1228 06:56:13.640644    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-422591" containerName="kube-apiserver"
	Dec 28 06:56:13 embed-certs-422591 kubelet[1305]: I1228 06:56:13.652523    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-9zxtp" podStartSLOduration=2.311052587 podStartE2EDuration="3.652504183s" podCreationTimestamp="2025-12-28 06:56:10 +0000 UTC" firstStartedPulling="2025-12-28 06:56:11.158278948 +0000 UTC m=+6.157100435" lastFinishedPulling="2025-12-28 06:56:12.499730547 +0000 UTC m=+7.498552031" observedRunningTime="2025-12-28 06:56:13.15314889 +0000 UTC m=+8.151970384" watchObservedRunningTime="2025-12-28 06:56:13.652504183 +0000 UTC m=+8.651325677"
	Dec 28 06:56:13 embed-certs-422591 kubelet[1305]: E1228 06:56:13.807422    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-422591" containerName="kube-controller-manager"
	Dec 28 06:56:20 embed-certs-422591 kubelet[1305]: E1228 06:56:20.532466    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-422591" containerName="etcd"
	Dec 28 06:56:22 embed-certs-422591 kubelet[1305]: E1228 06:56:22.109904    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-422591" containerName="kube-scheduler"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: I1228 06:56:23.071655    1305 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: I1228 06:56:23.213814    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lqp\" (UniqueName: \"kubernetes.io/projected/ac0163fe-8dd0-4650-a401-22a9a9310b5e-kube-api-access-n2lqp\") pod \"storage-provisioner\" (UID: \"ac0163fe-8dd0-4650-a401-22a9a9310b5e\") " pod="kube-system/storage-provisioner"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: I1228 06:56:23.213855    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf45\" (UniqueName: \"kubernetes.io/projected/73a84260-cf19-47c9-a23e-616f99cb5f38-kube-api-access-vtf45\") pod \"coredns-7d764666f9-dmhdv\" (UID: \"73a84260-cf19-47c9-a23e-616f99cb5f38\") " pod="kube-system/coredns-7d764666f9-dmhdv"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: I1228 06:56:23.213874    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac0163fe-8dd0-4650-a401-22a9a9310b5e-tmp\") pod \"storage-provisioner\" (UID: \"ac0163fe-8dd0-4650-a401-22a9a9310b5e\") " pod="kube-system/storage-provisioner"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: I1228 06:56:23.213896    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a84260-cf19-47c9-a23e-616f99cb5f38-config-volume\") pod \"coredns-7d764666f9-dmhdv\" (UID: \"73a84260-cf19-47c9-a23e-616f99cb5f38\") " pod="kube-system/coredns-7d764666f9-dmhdv"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: E1228 06:56:23.649946    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-422591" containerName="kube-apiserver"
	Dec 28 06:56:23 embed-certs-422591 kubelet[1305]: E1228 06:56:23.814392    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-422591" containerName="kube-controller-manager"
	Dec 28 06:56:24 embed-certs-422591 kubelet[1305]: E1228 06:56:24.170104    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dmhdv" containerName="coredns"
	Dec 28 06:56:24 embed-certs-422591 kubelet[1305]: I1228 06:56:24.182022    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.182004326 podStartE2EDuration="13.182004326s" podCreationTimestamp="2025-12-28 06:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:24.181920433 +0000 UTC m=+19.180741928" watchObservedRunningTime="2025-12-28 06:56:24.182004326 +0000 UTC m=+19.180825819"
	Dec 28 06:56:24 embed-certs-422591 kubelet[1305]: I1228 06:56:24.195126    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-dmhdv" podStartSLOduration=14.195102693 podStartE2EDuration="14.195102693s" podCreationTimestamp="2025-12-28 06:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:24.194562924 +0000 UTC m=+19.193384464" watchObservedRunningTime="2025-12-28 06:56:24.195102693 +0000 UTC m=+19.193924188"
	Dec 28 06:56:25 embed-certs-422591 kubelet[1305]: E1228 06:56:25.172889    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dmhdv" containerName="coredns"
	Dec 28 06:56:26 embed-certs-422591 kubelet[1305]: E1228 06:56:26.176011    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dmhdv" containerName="coredns"
	Dec 28 06:56:26 embed-certs-422591 kubelet[1305]: I1228 06:56:26.432746    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsvx\" (UniqueName: \"kubernetes.io/projected/b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3-kube-api-access-pvsvx\") pod \"busybox\" (UID: \"b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3\") " pod="default/busybox"
	Dec 28 06:56:34 embed-certs-422591 kubelet[1305]: E1228 06:56:34.471196    1305 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54034->127.0.0.1:35879: write tcp 127.0.0.1:54034->127.0.0.1:35879: write: broken pipe
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:56:35.475830  256538 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.539080  256538 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.603001  256538 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.670270  256538 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.734726  256538 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.800376  256538 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.869944  256538 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:35.932295  256538 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:35Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
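Every failure in the stderr block above has the same root cause: the log-collection step shells out to runc with --root /run/runc, and that state directory does not exist on this CRI-O node, so each listing exits with status 1. A minimal reproduction of the probe as the log shows it; the JSON field names follow runc's documented state output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// The exact command from the log: sudo runc --root /run/runc list -f json
	out, err := exec.Command("sudo", "runc", "--root", "/run/runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the branch the test hits:
		// "open /run/runc: no such file or directory"
		fmt.Println("runc list failed:", err)
		return
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("parse:", err)
		return
	}
	for _, c := range cs {
		fmt.Println(c.ID, c.Status)
	}
}
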
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-422591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.890766ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-500581 describe deploy/metrics-server -n kube-system: exit status 1 (57.343593ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-500581 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
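The image assertion above never ran against a live deployment: the enable command exited with status 11 before metrics-server was created, so the describe call returned NotFound and there was no deployment info to match against " fake.domain/registry.k8s.io/echoserver:1.4". A sketch of how the rendered image could be checked by hand once the addon does deploy, assuming the context and namespace shown in the test output:

	# Print the container image the metrics-server deployment actually uses; the test
	# expects it to contain fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context default-k8s-diff-port-500581 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'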
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-500581
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-500581:
-- stdout --
	[
	    {
	        "Id": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	        "Created": "2025-12-28T06:55:57.058727966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:55:57.098563046Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hostname",
	        "HostsPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hosts",
	        "LogPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db-json.log",
	        "Name": "/default-k8s-diff-port-500581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-500581:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-500581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	                "LowerDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-500581",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-500581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-500581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "adff1f07b261af6c68711aee16ba277cb5e5d354caaff8188152cc3e1a0f04b9",
	            "SandboxKey": "/var/run/docker/netns/adff1f07b261",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-500581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "561fd4603b1e0bc4629e98f37fdc1fd471ed3bacfee2a3df062fc13a3b58944e",
	                    "EndpointID": "52647c971121a4be5eda9305d6242424cdc24796b7956e9ed12b7a49102a3fc0",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "d2:5a:07:61:f4:0d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-500581",
	                        "da0ad7d17416"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
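The NetworkSettings.Ports map in the inspect output above is what the harness reads to locate the node's SSH endpoint; the 22/tcp HostPort (33073) reappears in the sshutil lines further down. The same lookup can be done directly with the Go template that also appears verbatim later in this log:

	# Extract the host port mapped to the node's SSH port (22/tcp); prints 33073 for this run.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-500581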
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25: (1.004074699s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-450365                                                                                                                                                                                                                  │ kubernetes-upgrade-450365    │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ image   │ test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p old-k8s-version-694122 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:09.683208  252331 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:09.683522  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683533  252331 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:09.683539  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683817  252331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:09.684408  252331 out.go:368] Setting JSON to false
	I1228 06:56:09.686138  252331 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2322,"bootTime":1766902648,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:09.686216  252331 start.go:143] virtualization: kvm guest
	I1228 06:56:09.688379  252331 out.go:179] * [no-preload-950460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:09.689966  252331 notify.go:221] Checking for updates...
	I1228 06:56:09.690624  252331 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:09.691759  252331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:09.693287  252331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:09.694542  252331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:09.696489  252331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:09.698353  252331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:09.700204  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:09.700981  252331 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:09.731534  252331 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:09.731673  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.809872  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.797345649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.810012  252331 docker.go:319] overlay module found
	I1228 06:56:09.811872  252331 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:09.813113  252331 start.go:309] selected driver: docker
	I1228 06:56:09.813141  252331 start.go:928] validating driver "docker" against &{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.813261  252331 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:09.814183  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.889225  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.87743098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.889583  252331 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:09.889616  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:09.889688  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:09.889728  252331 start.go:353] cluster config:
	{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.892491  252331 out.go:179] * Starting "no-preload-950460" primary control-plane node in "no-preload-950460" cluster
	I1228 06:56:09.893559  252331 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:09.895822  252331 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:09.897246  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:09.897378  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:09.897665  252331 cache.go:107] acquiring lock: {Name:mkd9176dc8bfe34090aff279f6f101ea6f0af9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.897748  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 06:56:09.897763  252331 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.737µs
	I1228 06:56:09.897776  252331 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 06:56:09.897792  252331 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:09.897921  252331 cache.go:107] acquiring lock: {Name:mk7d35a6d2b389149dcbeab5c7c2ffb31f57d65c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898003  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 06:56:09.898018  252331 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 105.145µs
	I1228 06:56:09.898051  252331 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 06:56:09.898068  252331 cache.go:107] acquiring lock: {Name:mk242447cc3bf85a80c449b21152ddfbb942621c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898065  252331 cache.go:107] acquiring lock: {Name:mke2c1949855d4a55e5668b0d2ae93b37c482c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898080  252331 cache.go:107] acquiring lock: {Name:mk532de4689e044277857a73866e5969a2e4fbc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898114  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 06:56:09.898091  252331 cache.go:107] acquiring lock: {Name:mke47ac9c7c044600bef8f6b93ef0e26dc8302f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898122  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 06:56:09.898122  252331 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 56.777µs
	I1228 06:56:09.898131  252331 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 06:56:09.898131  252331 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 104.803µs
	I1228 06:56:09.898140  252331 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 06:56:09.898147  252331 cache.go:107] acquiring lock: {Name:mk9e59e568752d1ca479b7f88a0993095cc4ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898154  252331 cache.go:107] acquiring lock: {Name:mk4a1a601fb4bce5015f4152fc8c90f967d969a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898175  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 06:56:09.898185  252331 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 104.327µs
	I1228 06:56:09.898197  252331 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 06:56:09.898201  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 06:56:09.898209  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1228 06:56:09.898214  252331 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 145.471µs
	I1228 06:56:09.898217  252331 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.787µs
	I1228 06:56:09.898225  252331 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 06:56:09.898228  252331 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 06:56:09.898247  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 06:56:09.898255  252331 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 110.483µs
	I1228 06:56:09.898263  252331 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 06:56:09.898271  252331 cache.go:87] Successfully saved all images to host disk.
	I1228 06:56:09.925389  252331 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:09.925420  252331 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:09.925442  252331 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:09.925482  252331 start.go:360] acquireMachinesLock for no-preload-950460: {Name:mk62d7b73784bafca52412532a69147c30805a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.925562  252331 start.go:364] duration metric: took 47.499µs to acquireMachinesLock for "no-preload-950460"
	I1228 06:56:09.925594  252331 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:09.925604  252331 fix.go:54] fixHost starting: 
	I1228 06:56:09.925883  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:09.947427  252331 fix.go:112] recreateIfNeeded on no-preload-950460: state=Stopped err=<nil>
	W1228 06:56:09.947470  252331 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:09.244143  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.744639  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.244325  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.364365  243963 kubeadm.go:1114] duration metric: took 4.219411016s to wait for elevateKubeSystemPrivileges
	I1228 06:56:10.364473  243963 kubeadm.go:403] duration metric: took 12.104828541s to StartCluster
	I1228 06:56:10.364513  243963 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.364574  243963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:10.367334  243963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.367689  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:10.368151  243963 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:10.368391  243963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:10.368490  243963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:10.368509  243963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	I1228 06:56:10.368558  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.369000  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.369135  243963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:10.369221  243963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:10.369280  243963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:10.369857  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.370623  243963 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:10.374484  243963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:10.403086  243963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:56:07.752961  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:09.756311  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:10.405267  243963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.405293  243963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:10.405355  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.407121  243963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	I1228 06:56:10.407166  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.408137  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.438924  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.442747  243963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.442772  243963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:10.442827  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.477359  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.532358  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:10.573979  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.588218  243963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:10.648019  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.867869  243963 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:11.085832  243963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:11.095783  243963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.058672  247213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:56:09.063442  247213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:56:09.063466  247213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:56:09.077870  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:56:09.407176  247213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:56:09.407367  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.407468  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-500581 minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=default-k8s-diff-port-500581 minikube.k8s.io/primary=true
	I1228 06:56:09.580457  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.580543  247213 ops.go:34] apiserver oom_adj: -16
	I1228 06:56:10.080579  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.581243  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.080638  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.581312  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.080705  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.580620  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.081161  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.581441  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.652690  247213 kubeadm.go:1114] duration metric: took 4.245373726s to wait for elevateKubeSystemPrivileges
	I1228 06:56:13.652726  247213 kubeadm.go:403] duration metric: took 12.364737655s to StartCluster
	I1228 06:56:13.652748  247213 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.652812  247213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:13.654909  247213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.655206  247213 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:13.655359  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:13.655613  247213 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:13.655657  247213 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:13.655720  247213 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.655737  247213 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.655761  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.656261  247213 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.656283  247213 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:13.656613  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.657602  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.660155  247213 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:13.661579  247213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:13.684520  247213 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:11.097178  243963 addons.go:530] duration metric: took 728.781424ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:11.372202  243963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422591" context rescaled to 1 replicas
	W1228 06:56:13.088569  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:13.685585  247213 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.685607  247213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:13.685662  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.686151  247213 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.686203  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.686699  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.718321  247213 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.718423  247213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:13.718565  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.728024  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.751115  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.767540  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:13.826652  247213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:13.845102  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.860783  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.971728  247213 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:13.973616  247213 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:14.185139  247213 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
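	[annotation] The sed pipeline at 06:56:13.767540 splices a hosts block for host.minikube.internal into the kube-system/coredns ConfigMap and replaces it in place. A quick way to confirm the record landed (illustrative; assumes kubectl is pointed at this cluster):
	
		kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	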
	I1228 06:56:09.949330  252331 out.go:252] * Restarting existing docker container for "no-preload-950460" ...
	I1228 06:56:09.949409  252331 cli_runner.go:164] Run: docker start no-preload-950460
	I1228 06:56:10.304369  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:10.333247  252331 kic.go:430] container "no-preload-950460" state is running.
	I1228 06:56:10.333791  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:10.362343  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:10.362749  252331 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:10.362898  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:10.399401  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:10.400763  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:10.400782  252331 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:10.401698  252331 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42784->127.0.0.1:33078: read: connection reset by peer
	I1228 06:56:13.530578  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.530607  252331 ubuntu.go:182] provisioning hostname "no-preload-950460"
	I1228 06:56:13.530671  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.551523  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.551766  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.551782  252331 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-950460 && echo "no-preload-950460" | sudo tee /etc/hostname
	I1228 06:56:13.697078  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.697213  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.734170  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.734651  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.734718  252331 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:13.876570  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:13.876646  252331 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:13.878995  252331 ubuntu.go:190] setting up certificates
	I1228 06:56:13.879017  252331 provision.go:84] configureAuth start
	I1228 06:56:13.879096  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:13.902076  252331 provision.go:143] copyHostCerts
	I1228 06:56:13.902141  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:13.902162  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:13.902253  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:13.902388  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:13.902401  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:13.902438  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:13.902511  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:13.902520  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:13.902560  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:13.902624  252331 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.no-preload-950460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-950460]
	I1228 06:56:14.048352  252331 provision.go:177] copyRemoteCerts
	I1228 06:56:14.048419  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:14.048452  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.068611  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:14.168261  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:14.190018  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:56:14.208765  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:14.226610  252331 provision.go:87] duration metric: took 347.581995ms to configureAuth
	I1228 06:56:14.226635  252331 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:14.226812  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:14.226900  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.244598  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:14.244866  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:14.244892  252331 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:12.253209  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:14.796990  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:15.100866  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:15.100892  252331 machine.go:97] duration metric: took 4.738124144s to provisionDockerMachine
	I1228 06:56:15.100904  252331 start.go:293] postStartSetup for "no-preload-950460" (driver="docker")
	I1228 06:56:15.100918  252331 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:15.101012  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:15.101073  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.125860  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.230154  252331 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:15.234858  252331 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:15.234891  252331 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:15.234905  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:15.234956  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:15.235108  252331 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:15.235252  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:15.245155  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:15.268602  252331 start.go:296] duration metric: took 167.682246ms for postStartSetup
	I1228 06:56:15.268700  252331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:15.268759  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.288607  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.381324  252331 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:15.386166  252331 fix.go:56] duration metric: took 5.460557205s for fixHost
	I1228 06:56:15.386193  252331 start.go:83] releasing machines lock for "no-preload-950460", held for 5.460617152s
	I1228 06:56:15.386267  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:15.405738  252331 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:15.405806  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.405845  252331 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:15.405936  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.426086  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.426572  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.573340  252331 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:15.580022  252331 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:15.614860  252331 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:15.619799  252331 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:15.619859  252331 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:15.627841  252331 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
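	[annotation] Matching bridge/podman CNI configs would have been renamed with a .mk_disabled suffix so they cannot conflict with the CNI minikube configures later; here none were found. To see what, if anything, was moved aside (illustrative, run inside the node):
	
		ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "no CNI configs were disabled"
	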
	I1228 06:56:15.627863  252331 start.go:496] detecting cgroup driver to use...
	I1228 06:56:15.627897  252331 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:15.627935  252331 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:15.643627  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:15.656486  252331 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:15.656542  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:15.670796  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:15.683099  252331 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:15.763732  252331 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:15.846193  252331 docker.go:234] disabling docker service ...
	I1228 06:56:15.846248  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:15.860365  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:15.872316  252331 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:15.952498  252331 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:16.036768  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:16.048883  252331 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:16.062667  252331 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:16.062719  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.072039  252331 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:16.072100  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.080521  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.089148  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.097405  252331 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:16.105158  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.113413  252331 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.122659  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.131327  252331 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:16.138849  252331 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:16.145687  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.222679  252331 ssh_runner.go:195] Run: sudo systemctl restart crio
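	[annotation] The sed runs above pin the pause image, switch CRI-O's cgroup manager to systemd, set conmon_cgroup, and open unprivileged ports via default_sysctls before the restart. A verification sketch (run inside the node):
	
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	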
	I1228 06:56:16.520445  252331 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:16.520595  252331 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:16.524711  252331 start.go:574] Will wait 60s for crictl version
	I1228 06:56:16.524766  252331 ssh_runner.go:195] Run: which crictl
	I1228 06:56:16.528189  252331 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:16.553043  252331 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:16.553151  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.580248  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.608534  252331 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:14.186403  247213 addons.go:530] duration metric: took 530.739381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:14.479845  247213 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-500581" context rescaled to 1 replicas
	W1228 06:56:15.976454  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:16.609592  252331 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:16.626775  252331 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:16.630900  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:16.641409  252331 kubeadm.go:884] updating cluster {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:16.641518  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:16.641556  252331 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:16.675102  252331 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:16.675123  252331 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:16.675129  252331 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:16.675244  252331 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-950460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
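	[annotation] The [Service] override above clears the packaged ExecStart and replaces it with the pinned v1.35.0 kubelet binary plus node-specific flags; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (06:56:16.735688). To view the merged unit with the drop-in applied (illustrative):
	
		systemctl cat kubelet
	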
	I1228 06:56:16.675331  252331 ssh_runner.go:195] Run: crio config
	I1228 06:56:16.718702  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:16.718733  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:16.718752  252331 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:16.718789  252331 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950460 NodeName:no-preload-950460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:16.718988  252331 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:16.719070  252331 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:16.727836  252331 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:16.727925  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:16.735688  252331 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:16.748533  252331 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:16.761180  252331 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
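	[annotation] With kubeadm.yaml.new now on disk, the rendered config could be sanity-checked before the restart path decides whether to reuse it. A sketch, assuming the kubeadm shipped in the binaries directory is recent enough to have the validate subcommand:
	
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	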
	I1228 06:56:16.774346  252331 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:16.777963  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:16.787778  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.870258  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:16.897229  252331 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460 for IP: 192.168.94.2
	I1228 06:56:16.897252  252331 certs.go:195] generating shared ca certs ...
	I1228 06:56:16.897273  252331 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:16.897417  252331 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:16.897469  252331 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:16.897483  252331 certs.go:257] generating profile certs ...
	I1228 06:56:16.897565  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key
	I1228 06:56:16.897621  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947
	I1228 06:56:16.897659  252331 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key
	I1228 06:56:16.897752  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:16.897786  252331 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:16.897800  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:16.897832  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:16.897861  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:16.897894  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:16.897943  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:16.898713  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:16.917010  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:16.936367  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:16.957237  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:16.980495  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:16.998372  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 06:56:17.015059  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:17.031891  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:17.049280  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:17.065663  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:17.082832  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:17.100902  252331 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:17.113166  252331 ssh_runner.go:195] Run: openssl version
	I1228 06:56:17.119103  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.126689  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:17.134233  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.137970  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.138010  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.174376  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:17.182094  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.189546  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:17.196673  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200312  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200355  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.235404  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:17.243056  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.251423  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:17.259118  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262689  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262740  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.298353  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
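	[annotation] Each certificate cycle above follows the OpenSSL CA-directory convention: hash the cert's subject, then expose the file as /etc/ssl/certs/<hash>.0 (hence the b5213941.0, 51391683.0 and 3ec20f2e.0 probes). The equivalent by hand (sketch):
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	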
	I1228 06:56:17.306420  252331 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:17.310366  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:17.344608  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:17.380698  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:17.426014  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:17.474223  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:17.531854  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
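	[annotation] openssl x509 -checkend N exits non-zero when the certificate expires within N seconds, so the six checks above assert every control-plane cert stays valid for at least 86400 s (24 h). Standalone form:
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >= 24h"
	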
	I1228 06:56:17.577281  252331 kubeadm.go:401] StartCluster: {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:17.577434  252331 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:17.636151  252331 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:17.648977  252331 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:17.649067  252331 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:17.657728  252331 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:17.657748  252331 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:17.657796  252331 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:17.666778  252331 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:17.668081  252331 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-950460" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.668996  252331 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-950460" cluster setting kubeconfig missing "no-preload-950460" context setting]
	I1228 06:56:17.670453  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.672683  252331 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:17.683544  252331 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1228 06:56:17.683585  252331 kubeadm.go:602] duration metric: took 25.829752ms to restartPrimaryControlPlane
	I1228 06:56:17.683596  252331 kubeadm.go:403] duration metric: took 106.327386ms to StartCluster
	I1228 06:56:17.683615  252331 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.683665  252331 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.686260  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.686556  252331 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:17.686676  252331 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:17.686779  252331 addons.go:70] Setting storage-provisioner=true in profile "no-preload-950460"
	I1228 06:56:17.686790  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:17.686794  252331 addons.go:239] Setting addon storage-provisioner=true in "no-preload-950460"
	W1228 06:56:17.686802  252331 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:17.686829  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686834  252331 addons.go:70] Setting default-storageclass=true in profile "no-preload-950460"
	I1228 06:56:17.686838  252331 addons.go:70] Setting dashboard=true in profile "no-preload-950460"
	I1228 06:56:17.686865  252331 addons.go:239] Setting addon dashboard=true in "no-preload-950460"
	W1228 06:56:17.686879  252331 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:17.686912  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686847  252331 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950460"
	I1228 06:56:17.687329  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687415  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687330  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.689184  252331 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:17.690310  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:17.712805  252331 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:17.713229  252331 addons.go:239] Setting addon default-storageclass=true in "no-preload-950460"
	W1228 06:56:17.713248  252331 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:17.713270  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.713562  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.713731  252331 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:17.713774  252331 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.713791  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:17.713835  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.715782  252331 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1228 06:56:15.089728  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:17.589238  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:17.716776  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:17.716793  252331 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:17.716846  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.737306  252331 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.737329  252331 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:17.737387  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.747296  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.752550  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.763145  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.827637  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:17.841176  252331 node_ready.go:35] waiting up to 6m0s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:17.852679  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.859387  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:17.859413  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:17.870358  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.876579  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:17.876626  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:17.892110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:17.892137  252331 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:17.907110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:17.907153  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:17.921175  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:17.921199  252331 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:17.934592  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:17.934610  252331 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:17.946620  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:17.946645  252331 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:17.958616  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:17.958637  252331 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:17.971511  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:17.971531  252331 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:17.984466  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:19.111197  252331 node_ready.go:49] node "no-preload-950460" is "Ready"
	I1228 06:56:19.111234  252331 node_ready.go:38] duration metric: took 1.270013468s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:19.111250  252331 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:19.111303  252331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:19.644061  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.791326834s)
	I1228 06:56:19.644127  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.773734972s)
	I1228 06:56:19.644217  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.659719643s)
	I1228 06:56:19.644238  252331 api_server.go:72] duration metric: took 1.957648252s to wait for apiserver process to appear ...
	I1228 06:56:19.644247  252331 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:19.644265  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:19.646079  252331 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-950460 addons enable metrics-server
	
	I1228 06:56:19.648689  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:19.648710  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
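
The repeated 500s above are expected while kube-apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish; minikube simply keeps polling /healthz until it returns 200. Below is a minimal Go sketch of such a poll loop, assuming a self-signed apiserver certificate; it is an illustration, not minikube's actual api_server.go.

// healthzpoll: hypothetical sketch of polling an apiserver /healthz
// endpoint until it reports 200 OK, as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert here, so skip verification
	// (acceptable for a local health probe, never for real traffic).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 500 with "[-]poststarthook/... failed" lines: retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
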
	I1228 06:56:19.652919  252331 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:56:19.654055  252331 addons.go:530] duration metric: took 1.967385599s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1228 06:56:17.252978  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:19.752632  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:17.976710  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.476521  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.089066  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:22.089199  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:23.089137  243963 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:23.089171  243963 node_ready.go:38] duration metric: took 12.00330569s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:23.089188  243963 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:23.089247  243963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:23.109640  243963 api_server.go:72] duration metric: took 12.740459175s to wait for apiserver process to appear ...
	I1228 06:56:23.109670  243963 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:23.109691  243963 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:23.115347  243963 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:56:23.116388  243963 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:23.116413  243963 api_server.go:131] duration metric: took 6.736322ms to wait for apiserver health ...
	I1228 06:56:23.116422  243963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:23.120151  243963 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:23.120183  243963 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.120191  243963 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.120197  243963 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.120217  243963 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.120229  243963 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.120236  243963 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.120242  243963 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.120247  243963 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.120255  243963 system_pods.go:74] duration metric: took 3.827732ms to wait for pod list to return data ...
	I1228 06:56:23.120267  243963 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:23.122455  243963 default_sa.go:45] found service account: "default"
	I1228 06:56:23.122484  243963 default_sa.go:55] duration metric: took 2.209324ms for default service account to be created ...
	I1228 06:56:23.122495  243963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:23.125732  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.125761  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.125768  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.125774  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.125782  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.125798  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.125806  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.125812  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.125821  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.125858  243963 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1228 06:56:23.380969  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.381005  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.381014  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.381023  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.381042  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.381051  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.381057  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.381067  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.381075  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:23.736873  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.736924  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.736933  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.736942  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.736955  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:23.736965  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.736971  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.736990  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.737002  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:24.078656  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.078690  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:24.078696  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.078700  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.078704  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.078709  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.078712  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.078715  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.078721  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
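
The system_pods/retry lines above poll the kube-system pod list until every component reports Running ("will retry after 300ms: missing components: kube-dns"). A hypothetical client-go sketch of that loop follows; the kubeconfig path and the 300ms interval are taken from the log, everything else is illustrative.

// podwait: hypothetical sketch of the "waiting for k8s-apps to be
// running" loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is still %s\n", p.Name, p.Status.Phase)
			return false
		}
	}
	return true
}

func main() {
	// Path matches the KUBECONFIG minikube uses in the commands above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		list, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil && allRunning(list.Items) {
			fmt.Println("all kube-system pods running")
			return
		}
		time.Sleep(300 * time.Millisecond) // retry interval from the log
	}
}
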
	I1228 06:56:20.144322  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.148700  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:20.148728  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:20.644327  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.648377  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 06:56:20.649429  252331 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:20.649449  252331 api_server.go:131] duration metric: took 1.005195846s to wait for apiserver health ...
	I1228 06:56:20.649458  252331 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:20.652593  252331 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:20.652630  252331 system_pods.go:61] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.652637  252331 system_pods.go:61] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.652644  252331 system_pods.go:61] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.652653  252331 system_pods.go:61] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.652667  252331 system_pods.go:61] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.652675  252331 system_pods.go:61] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.652686  252331 system_pods.go:61] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.652694  252331 system_pods.go:61] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.652703  252331 system_pods.go:74] duration metric: took 3.239436ms to wait for pod list to return data ...
	I1228 06:56:20.652715  252331 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:20.654840  252331 default_sa.go:45] found service account: "default"
	I1228 06:56:20.654856  252331 default_sa.go:55] duration metric: took 2.135398ms for default service account to be created ...
	I1228 06:56:20.654863  252331 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:20.656911  252331 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:20.656935  252331 system_pods.go:89] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.656943  252331 system_pods.go:89] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.656950  252331 system_pods.go:89] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.656955  252331 system_pods.go:89] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.656961  252331 system_pods.go:89] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.656969  252331 system_pods.go:89] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.656974  252331 system_pods.go:89] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.656979  252331 system_pods.go:89] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.656988  252331 system_pods.go:126] duration metric: took 2.120486ms to wait for k8s-apps to be running ...
	I1228 06:56:20.656995  252331 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:20.657051  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:20.671024  252331 system_svc.go:56] duration metric: took 14.023192ms WaitForService to wait for kubelet
	I1228 06:56:20.671072  252331 kubeadm.go:587] duration metric: took 2.984480725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:20.671093  252331 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:20.673706  252331 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:20.673727  252331 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:20.673740  252331 node_conditions.go:105] duration metric: took 2.643602ms to run NodePressure ...
	I1228 06:56:20.673752  252331 start.go:242] waiting for startup goroutines ...
	I1228 06:56:20.673758  252331 start.go:247] waiting for cluster config update ...
	I1228 06:56:20.673773  252331 start.go:256] writing updated cluster config ...
	I1228 06:56:20.674067  252331 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:20.677778  252331 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:20.681121  252331 pod_ready.go:83] waiting for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:56:22.686104  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:22.251764  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:24.253072  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:24.497471  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.497502  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running
	I1228 06:56:24.497510  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.497516  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.497521  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.497528  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.497533  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.497539  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.497545  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running
	I1228 06:56:24.497556  243963 system_pods.go:126] duration metric: took 1.375053604s to wait for k8s-apps to be running ...
	I1228 06:56:24.497578  243963 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:24.497628  243963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:24.514567  243963 system_svc.go:56] duration metric: took 16.979492ms WaitForService to wait for kubelet
	I1228 06:56:24.514605  243963 kubeadm.go:587] duration metric: took 14.145429952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:24.514629  243963 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:24.518108  243963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:24.518140  243963 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:24.518158  243963 node_conditions.go:105] duration metric: took 3.522325ms to run NodePressure ...
	I1228 06:56:24.518177  243963 start.go:242] waiting for startup goroutines ...
	I1228 06:56:24.518186  243963 start.go:247] waiting for cluster config update ...
	I1228 06:56:24.518200  243963 start.go:256] writing updated cluster config ...
	I1228 06:56:24.518505  243963 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:24.523480  243963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:24.528339  243963 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.533314  243963 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:56:24.533340  243963 pod_ready.go:86] duration metric: took 4.973959ms for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.535652  243963 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.540088  243963 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:56:24.540118  243963 pod_ready.go:86] duration metric: took 4.440493ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.542361  243963 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.546378  243963 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:56:24.546401  243963 pod_ready.go:86] duration metric: took 4.016397ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.548746  243963 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.928795  243963 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:56:24.928827  243963 pod_ready.go:86] duration metric: took 380.060187ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.129424  243963 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.528796  243963 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:56:25.528829  243963 pod_ready.go:86] duration metric: took 399.379664ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.728149  243963 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129240  243963 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:56:26.129352  243963 pod_ready.go:86] duration metric: took 401.16633ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129383  243963 pod_ready.go:40] duration metric: took 1.605872095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:26.195003  243963 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:26.196497  243963 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
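
Each pod_ready line above tests the pod's Ready condition before moving on to the next control-plane pod. A small hypothetical helper expressing that check with the k8s.io/api types:

// podready: a pod counts as "Ready" when its PodReady condition is True,
// which is what the pod_ready.go lines above report.
package podready

import corev1 "k8s.io/api/core/v1"

func PodIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
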
	W1228 06:56:22.478649  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:24.977721  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:26.478547  247213 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:26.478581  247213 node_ready.go:38] duration metric: took 12.504894114s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:26.478597  247213 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:26.478645  247213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:26.500009  247213 api_server.go:72] duration metric: took 12.844753456s to wait for apiserver process to appear ...
	I1228 06:56:26.500069  247213 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:26.500092  247213 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:56:26.505791  247213 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:56:26.506819  247213 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:26.506850  247213 api_server.go:131] duration metric: took 6.772745ms to wait for apiserver health ...
	I1228 06:56:26.506860  247213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:26.511152  247213 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:26.511188  247213 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.511196  247213 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.511210  247213 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.511217  247213 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.511223  247213 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.511228  247213 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.511237  247213 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.511245  247213 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.511257  247213 system_pods.go:74] duration metric: took 4.390309ms to wait for pod list to return data ...
	I1228 06:56:26.511272  247213 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:26.516259  247213 default_sa.go:45] found service account: "default"
	I1228 06:56:26.516290  247213 default_sa.go:55] duration metric: took 5.010014ms for default service account to be created ...
	I1228 06:56:26.516302  247213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:26.522640  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.522682  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.522692  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.522701  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.522706  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.522712  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.522718  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.522725  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.522732  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.522761  247213 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1228 06:56:26.727648  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.727695  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.727705  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.727714  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.727719  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.727726  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.727733  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.727739  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.727753  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.048953  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.048983  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.048988  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.048995  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.048999  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.049002  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.049006  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.049012  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.049019  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.347697  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.347744  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.347753  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.347761  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.347767  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.347773  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.347779  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.347784  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.347792  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.894612  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.894645  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running
	I1228 06:56:27.894654  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.894661  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.894668  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.894674  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.894747  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.894780  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.894786  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running
	I1228 06:56:27.894796  247213 system_pods.go:126] duration metric: took 1.378485807s to wait for k8s-apps to be running ...
	I1228 06:56:27.894807  247213 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:27.894877  247213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:27.913725  247213 system_svc.go:56] duration metric: took 18.908162ms WaitForService to wait for kubelet
	I1228 06:56:27.913765  247213 kubeadm.go:587] duration metric: took 14.258529006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:27.913788  247213 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:27.917024  247213 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:27.917082  247213 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:27.917101  247213 node_conditions.go:105] duration metric: took 3.307449ms to run NodePressure ...
	I1228 06:56:27.917117  247213 start.go:242] waiting for startup goroutines ...
	I1228 06:56:27.917128  247213 start.go:247] waiting for cluster config update ...
	I1228 06:56:27.917147  247213 start.go:256] writing updated cluster config ...
	I1228 06:56:27.917432  247213 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:27.922292  247213 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:27.928675  247213 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.933976  247213 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:56:27.934000  247213 pod_ready.go:86] duration metric: took 5.293782ms for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.952822  247213 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.957941  247213 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.957969  247213 pod_ready.go:86] duration metric: took 5.117578ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.960256  247213 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.964517  247213 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.964541  247213 pod_ready.go:86] duration metric: took 4.26155ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.966612  247213 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.326675  247213 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:28.326711  247213 pod_ready.go:86] duration metric: took 360.070556ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.527492  247213 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.926562  247213 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:56:28.926586  247213 pod_ready.go:86] duration metric: took 398.654778ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.128257  247213 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527347  247213 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:29.527373  247213 pod_ready.go:86] duration metric: took 399.091542ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527384  247213 pod_ready.go:40] duration metric: took 1.605062412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.572470  247213 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:29.574045  247213 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
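
The node_conditions lines above read node capacity (ephemeral storage "304681132Ki", cpu "8") and verify that no pressure condition is set. A hypothetical sketch of both checks, using the standard core/v1 types:

// nodepressure: sketch of the NodePressure verification logged above.
package nodepressure

import corev1 "k8s.io/api/core/v1"

// UnderPressure reports whether any pressure condition is True.
func UnderPressure(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return true
			}
		}
	}
	return false
}

// Capacity returns the values the log prints, e.g. ephemeral-storage
// "304681132Ki" and cpu "8".
func Capacity(node *corev1.Node) (storage, cpu string) {
	return node.Status.Capacity.StorageEphemeral().String(),
		node.Status.Capacity.Cpu().String()
}
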
	W1228 06:56:24.687607  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:27.187235  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:26.754423  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:28.252283  242715 pod_ready.go:94] pod "coredns-5dd5756b68-f75js" is "Ready"
	I1228 06:56:28.252312  242715 pod_ready.go:86] duration metric: took 34.005583819s for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.255219  242715 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.259146  242715 pod_ready.go:94] pod "etcd-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.259168  242715 pod_ready.go:86] duration metric: took 3.930339ms for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.261639  242715 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.265232  242715 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.265251  242715 pod_ready.go:86] duration metric: took 3.589847ms for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.267802  242715 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.450233  242715 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.450266  242715 pod_ready.go:86] duration metric: took 182.442698ms for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.651005  242715 pod_ready.go:83] waiting for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.050020  242715 pod_ready.go:94] pod "kube-proxy-ckjcc" is "Ready"
	I1228 06:56:29.050071  242715 pod_ready.go:86] duration metric: took 399.008645ms for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.250805  242715 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650219  242715 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-694122" is "Ready"
	I1228 06:56:29.650260  242715 pod_ready.go:86] duration metric: took 399.415539ms for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650277  242715 pod_ready.go:40] duration metric: took 35.408765036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.699567  242715 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 06:56:29.701172  242715 out.go:203] 
	W1228 06:56:29.702316  242715 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 06:56:29.703412  242715 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:56:29.704563  242715 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-694122" cluster and "default" namespace by default
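
The version-skew warning above compares kubectl's minor version (35) with the cluster's (28). A minimal sketch of that arithmetic; the values are copied from the log, and the one-minor-version tolerance is an assumption for illustration:

// skew: hypothetical sketch of the minor-version skew check behind the
// "! /usr/local/bin/kubectl is version ..." warning above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.35.0", "1.28.0" // values from the log above
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 7
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", kubectl, cluster)
	}
}
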
	W1228 06:56:29.687654  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:32.186292  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:26 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:26.546579058Z" level=info msg="Starting container: 81cbcff11564f3e7770a7cd11e184363bf61aa06c5a6059b7a49a60eb96e2a9a" id=cf3a3aca-9174-400d-8bd9-685385548116 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:26 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:26.54907499Z" level=info msg="Started container" PID=1895 containerID=81cbcff11564f3e7770a7cd11e184363bf61aa06c5a6059b7a49a60eb96e2a9a description=kube-system/coredns-7d764666f9-9glh9/coredns id=cf3a3aca-9174-400d-8bd9-685385548116 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4386a8d4503c9e572b13a6554cec61351a685c5298ad5a51e81f9bf0f3cb62cc
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.043285878Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cb62fc97-19dd-4b78-b758-1d28e96aa8db name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.043407035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.048681948Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:956e960aaae0708bcfeefb37bc566ecf9d8f85d521b317e93a9f4cec54078a55 UID:68eee6fa-3951-4c02-bfa6-e8dd801288c4 NetNS:/var/run/netns/31402b3c-2fcf-44a3-8aac-69b8f1ac2e1e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00070a318}] Aliases:map[]}"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.048714599Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.064637572Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:956e960aaae0708bcfeefb37bc566ecf9d8f85d521b317e93a9f4cec54078a55 UID:68eee6fa-3951-4c02-bfa6-e8dd801288c4 NetNS:/var/run/netns/31402b3c-2fcf-44a3-8aac-69b8f1ac2e1e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00070a318}] Aliases:map[]}"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.064784713Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.065609625Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.066501195Z" level=info msg="Ran pod sandbox 956e960aaae0708bcfeefb37bc566ecf9d8f85d521b317e93a9f4cec54078a55 with infra container: default/busybox/POD" id=cb62fc97-19dd-4b78-b758-1d28e96aa8db name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.067820622Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c2785727-dcb1-4123-b69e-992b7ec613cb name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.067996862Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c2785727-dcb1-4123-b69e-992b7ec613cb name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.068096859Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c2785727-dcb1-4123-b69e-992b7ec613cb name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.06892146Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e453b02-86bd-4d8d-b117-f98c93c87f23 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:56:30 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:30.069274033Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.342049356Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0e453b02-86bd-4d8d-b117-f98c93c87f23 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.342652892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4824be1b-73dd-4126-bfb4-4f9ef9372a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.344361763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5349f017-f902-4998-84c6-64e97b993227 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.347591446Z" level=info msg="Creating container: default/busybox/busybox" id=777212c0-5197-4dcf-bc96-358a57a663ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.347777034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.351544125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.351946909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.380284644Z" level=info msg="Created container 9301ac6fd47e2765e15716ba756b72ea350516e41c03e73f876ff54765144f10: default/busybox/busybox" id=777212c0-5197-4dcf-bc96-358a57a663ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.380911845Z" level=info msg="Starting container: 9301ac6fd47e2765e15716ba756b72ea350516e41c03e73f876ff54765144f10" id=b64a6d3c-d22c-4a84-b0c2-a9ffdb15ff90 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:31 default-k8s-diff-port-500581 crio[773]: time="2025-12-28T06:56:31.382519072Z" level=info msg="Started container" PID=1981 containerID=9301ac6fd47e2765e15716ba756b72ea350516e41c03e73f876ff54765144f10 description=default/busybox/busybox id=b64a6d3c-d22c-4a84-b0c2-a9ffdb15ff90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=956e960aaae0708bcfeefb37bc566ecf9d8f85d521b317e93a9f4cec54078a55
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9301ac6fd47e2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   956e960aaae07       busybox                                                default
	81cbcff11564f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   4386a8d4503c9       coredns-7d764666f9-9glh9                               kube-system
	94e32989c2e09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   8969f206f10c1       storage-provisioner                                    kube-system
	e4f69991099dc       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   2f471042ac7fa       kindnet-lsrww                                          kube-system
	1d757936c70f6       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   61149d4dcb6fe       kube-proxy-95gmh                                       kube-system
	b8bfffebb1d80       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   a6ffcadda709c       etcd-default-k8s-diff-port-500581                      kube-system
	ffe9c939a4044       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   1de5b4ccc17d9       kube-scheduler-default-k8s-diff-port-500581            kube-system
	1402288108aca       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   fc329a3252f13       kube-apiserver-default-k8s-diff-port-500581            kube-system
	6e4217d4492dd       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   7fce379efc396       kube-controller-manager-default-k8s-diff-port-500581   kube-system
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-500581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-500581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-500581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-500581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:56:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:26 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:26 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:26 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:26 +0000   Sun, 28 Dec 2025 06:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-500581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                b3bebd8a-2cf1-4ff4-9600-b6e76b191bd7
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-9glh9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-500581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-lsrww                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-500581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-500581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-95gmh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-500581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node default-k8s-diff-port-500581 event: Registered Node default-k8s-diff-port-500581 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:56:38 up 39 min,  0 user,  load average: 3.35, 2.70, 1.73
	Linux default-k8s-diff-port-500581 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766345    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b5bd3c1b-325d-46fe-9378-779822f0ba5b-cni-cfg\") pod \"kindnet-lsrww\" (UID: \"b5bd3c1b-325d-46fe-9378-779822f0ba5b\") " pod="kube-system/kindnet-lsrww"
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766381    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5bd3c1b-325d-46fe-9378-779822f0ba5b-xtables-lock\") pod \"kindnet-lsrww\" (UID: \"b5bd3c1b-325d-46fe-9378-779822f0ba5b\") " pod="kube-system/kindnet-lsrww"
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766410    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f25e4b21-a201-4838-b7c9-a5fde3304662-lib-modules\") pod \"kube-proxy-95gmh\" (UID: \"f25e4b21-a201-4838-b7c9-a5fde3304662\") " pod="kube-system/kube-proxy-95gmh"
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766487    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq8jr\" (UniqueName: \"kubernetes.io/projected/f25e4b21-a201-4838-b7c9-a5fde3304662-kube-api-access-kq8jr\") pod \"kube-proxy-95gmh\" (UID: \"f25e4b21-a201-4838-b7c9-a5fde3304662\") " pod="kube-system/kube-proxy-95gmh"
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766560    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5bd3c1b-325d-46fe-9378-779822f0ba5b-lib-modules\") pod \"kindnet-lsrww\" (UID: \"b5bd3c1b-325d-46fe-9378-779822f0ba5b\") " pod="kube-system/kindnet-lsrww"
	Dec 28 06:56:13 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:13.766633    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqq7\" (UniqueName: \"kubernetes.io/projected/b5bd3c1b-325d-46fe-9378-779822f0ba5b-kube-api-access-nnqq7\") pod \"kindnet-lsrww\" (UID: \"b5bd3c1b-325d-46fe-9378-779822f0ba5b\") " pod="kube-system/kindnet-lsrww"
	Dec 28 06:56:14 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:14.411584    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-95gmh" podStartSLOduration=1.411566877 podStartE2EDuration="1.411566877s" podCreationTimestamp="2025-12-28 06:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:14.411497522 +0000 UTC m=+6.183042175" watchObservedRunningTime="2025-12-28 06:56:14.411566877 +0000 UTC m=+6.183111537"
	Dec 28 06:56:14 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:14.532305    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-500581" containerName="kube-apiserver"
	Dec 28 06:56:16 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:16.415888    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-lsrww" podStartSLOduration=2.060652135 podStartE2EDuration="3.415867992s" podCreationTimestamp="2025-12-28 06:56:13 +0000 UTC" firstStartedPulling="2025-12-28 06:56:14.036745953 +0000 UTC m=+5.808290595" lastFinishedPulling="2025-12-28 06:56:15.391961804 +0000 UTC m=+7.163506452" observedRunningTime="2025-12-28 06:56:16.415747626 +0000 UTC m=+8.187292284" watchObservedRunningTime="2025-12-28 06:56:16.415867992 +0000 UTC m=+8.187412650"
	Dec 28 06:56:21 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:21.712116    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-500581" containerName="kube-scheduler"
	Dec 28 06:56:22 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:22.434636    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-500581" containerName="etcd"
	Dec 28 06:56:23 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:23.066918    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-500581" containerName="kube-controller-manager"
	Dec 28 06:56:24 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:24.539731    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-500581" containerName="kube-apiserver"
	Dec 28 06:56:26 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:26.139768    1310 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 28 06:56:26 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:26.262626    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c46cd6f-643c-4dcd-9ffd-88becb063b24-config-volume\") pod \"coredns-7d764666f9-9glh9\" (UID: \"9c46cd6f-643c-4dcd-9ffd-88becb063b24\") " pod="kube-system/coredns-7d764666f9-9glh9"
	Dec 28 06:56:26 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:26.262695    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12b3784f-bffe-49c1-8915-2011c07bee4e-tmp\") pod \"storage-provisioner\" (UID: \"12b3784f-bffe-49c1-8915-2011c07bee4e\") " pod="kube-system/storage-provisioner"
	Dec 28 06:56:26 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:26.262721    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpr7\" (UniqueName: \"kubernetes.io/projected/12b3784f-bffe-49c1-8915-2011c07bee4e-kube-api-access-djpr7\") pod \"storage-provisioner\" (UID: \"12b3784f-bffe-49c1-8915-2011c07bee4e\") " pod="kube-system/storage-provisioner"
	Dec 28 06:56:26 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:26.262830    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfjn4\" (UniqueName: \"kubernetes.io/projected/9c46cd6f-643c-4dcd-9ffd-88becb063b24-kube-api-access-sfjn4\") pod \"coredns-7d764666f9-9glh9\" (UID: \"9c46cd6f-643c-4dcd-9ffd-88becb063b24\") " pod="kube-system/coredns-7d764666f9-9glh9"
	Dec 28 06:56:27 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:27.431642    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9glh9" containerName="coredns"
	Dec 28 06:56:27 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:27.473838    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9glh9" podStartSLOduration=14.473816824 podStartE2EDuration="14.473816824s" podCreationTimestamp="2025-12-28 06:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:27.462808098 +0000 UTC m=+19.234352756" watchObservedRunningTime="2025-12-28 06:56:27.473816824 +0000 UTC m=+19.245361483"
	Dec 28 06:56:27 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:27.487322    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.487302653 podStartE2EDuration="13.487302653s" podCreationTimestamp="2025-12-28 06:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:56:27.474731697 +0000 UTC m=+19.246276357" watchObservedRunningTime="2025-12-28 06:56:27.487302653 +0000 UTC m=+19.258847312"
	Dec 28 06:56:28 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:28.437559    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9glh9" containerName="coredns"
	Dec 28 06:56:29 default-k8s-diff-port-500581 kubelet[1310]: E1228 06:56:29.439290    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9glh9" containerName="coredns"
	Dec 28 06:56:29 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:29.783414    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8nnr\" (UniqueName: \"kubernetes.io/projected/68eee6fa-3951-4c02-bfa6-e8dd801288c4-kube-api-access-x8nnr\") pod \"busybox\" (UID: \"68eee6fa-3951-4c02-bfa6-e8dd801288c4\") " pod="default/busybox"
	Dec 28 06:56:31 default-k8s-diff-port-500581 kubelet[1310]: I1228 06:56:31.454987    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.179987784 podStartE2EDuration="2.454967662s" podCreationTimestamp="2025-12-28 06:56:29 +0000 UTC" firstStartedPulling="2025-12-28 06:56:30.068539437 +0000 UTC m=+21.840084077" lastFinishedPulling="2025-12-28 06:56:31.343519296 +0000 UTC m=+23.115063955" observedRunningTime="2025-12-28 06:56:31.454814794 +0000 UTC m=+23.226359452" watchObservedRunningTime="2025-12-28 06:56:31.454967662 +0000 UTC m=+23.226512332"
	

-- /stdout --
** stderr ** 
	E1228 06:56:37.857287  257298 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:37Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:37.925956  257298 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:37Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:37.994662  257298 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:37Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:38.066699  257298 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:38Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:38.133818  257298 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:38Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:38.205543  257298 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:38Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:38.266335  257298 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:38Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:38.328177  257298 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:38Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.10s)
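
Both halves of this failure trace to the same check: before touching addons, minikube lists containers with `sudo runc --root /run/runc list -f json`, and on this crio node `/run/runc` does not exist, so the listing errors out instead of reporting "nothing running". Below is a minimal Go sketch of such a check with a defensive fallback; the function names and the treat-missing-root-as-empty behavior are illustrative assumptions, not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer keeps only the `runc list -f json` fields this check needs;
// the real output carries more (pid, bundle, created, ...).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listContainers runs `runc --root <root> list -f json` and decodes the result.
// A missing root directory is treated as "no containers" instead of the hard
// failure ("open /run/runc: no such file or directory") seen in the logs above.
func listContainers(root string) ([]runcContainer, error) {
	if _, err := os.Stat(root); os.IsNotExist(err) {
		return nil, nil // nothing was ever created under this runtime root
	}
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer // `runc list -f json` prints a JSON array (or null)
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, fmt.Errorf("decode runc list output: %w", err)
	}
	return cs, nil
}

func main() {
	cs, err := listContainers("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d containers\n", len(cs))
}

The other plausible reading of these logs is that crio keeps its runc state under a different root (crio exposes this as `runtime_root` in `crio config`, which minikube runs just before the failing call), in which case the root should be discovered from the config rather than defaulted away.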

TestStartStop/group/old-k8s-version/serial/Pause (5.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-694122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-694122 --alsologtostderr -v=1: exit status 80 (2.16394806s)

-- stdout --
	* Pausing node old-k8s-version-694122 ... 
	
	

-- /stdout --
** stderr ** 
	I1228 06:56:41.391439  258291 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:41.391704  258291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:41.391715  258291 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:41.391720  258291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:41.391897  258291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:41.392133  258291 out.go:368] Setting JSON to false
	I1228 06:56:41.392159  258291 mustload.go:66] Loading cluster: old-k8s-version-694122
	I1228 06:56:41.392519  258291 config.go:182] Loaded profile config "old-k8s-version-694122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1228 06:56:41.392937  258291 cli_runner.go:164] Run: docker container inspect old-k8s-version-694122 --format={{.State.Status}}
	I1228 06:56:41.411373  258291 host.go:66] Checking if "old-k8s-version-694122" exists ...
	I1228 06:56:41.411667  258291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:41.467697  258291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-28 06:56:41.458020305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:41.468373  258291 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-694122 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:56:41.470177  258291 out.go:179] * Pausing node old-k8s-version-694122 ... 
	I1228 06:56:41.471254  258291 host.go:66] Checking if "old-k8s-version-694122" exists ...
	I1228 06:56:41.471490  258291 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:41.471529  258291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694122
	I1228 06:56:41.489154  258291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/old-k8s-version-694122/id_rsa Username:docker}
	I1228 06:56:41.577726  258291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:41.598282  258291 pause.go:52] kubelet running: true
	I1228 06:56:41.598376  258291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:56:41.758683  258291 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:41.807641  258291 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:56:41.819306  258291 retry.go:84] will retry after 200ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:41Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:42.051834  258291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:42.064778  258291 pause.go:52] kubelet running: false
	I1228 06:56:42.064832  258291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:56:42.207332  258291 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:42.259073  258291 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:56:42.719062  258291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:42.731813  258291 pause.go:52] kubelet running: false
	I1228 06:56:42.731870  258291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:56:42.871343  258291 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:42.920533  258291 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:56:43.281536  258291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:43.294519  258291 pause.go:52] kubelet running: false
	I1228 06:56:43.294567  258291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:56:43.429869  258291 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:43.480239  258291 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:56:43.494239  258291 out.go:203] 
	W1228 06:56:43.495327  258291 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:56:43.495342  258291 out.go:285] * 
	* 
	W1228 06:56:43.496990  258291 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:56:43.499005  258291 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-694122 --alsologtostderr -v=1 failed: exit status 80
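
Note the shape of the failure above: after the first `runc list` error, minikube retries (`retry.go:84] will retry after 200ms`), re-running the kubelet check, `crio config`, and `runc list` at 06:56:42 and 06:56:43 before giving up with GUEST_PAUSE. A minimal sketch of that retry-with-backoff pattern follows; the doubling schedule and names are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, doubling the wait after each failure,
// and returns the last error once the budget is exhausted.
func retry(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	// Stand-in for the failing pause step: it never succeeds, mirroring the log.
	err := retry(4, 200*time.Millisecond, func() error {
		return errors.New(`open /run/runc: no such file or directory`)
	})
	fmt.Println(err)
}

Retrying is the right move for transient runtime errors, but here the condition is persistent (the directory never exists), so every attempt fails identically and the test only pays the backoff cost.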
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-694122
helpers_test.go:244: (dbg) docker inspect old-k8s-version-694122:

-- stdout --
	[
	    {
	        "Id": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	        "Created": "2025-12-28T06:54:32.483449473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:55:40.77317765Z",
	            "FinishedAt": "2025-12-28T06:55:39.878075511Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hosts",
	        "LogPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4-json.log",
	        "Name": "/old-k8s-version-694122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-694122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-694122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	                "LowerDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-694122",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-694122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-694122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8a2c1f2ca0edda7fc61319821d1b1b9478e21a8166e58c5ceefe4687ad1185e",
	            "SandboxKey": "/var/run/docker/netns/f8a2c1f2ca0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-694122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "910bcfa8529441ad2bfa62f448459947be2ed515eaa365c95b9fc10d53f59423",
	                    "EndpointID": "c5067af631bf950ff3e81937626ccb025d7006ff6324b6a2e17ab1a68dd827b4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4a:77:23:35:01:07",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-694122",
	                        "0dd1cc4ae5d6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
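
As a side note on the machinery above: the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call in the stderr trace reads the SSH host port (33063 here) out of the `NetworkSettings.Ports` map shown in this inspect output. A small self-contained Go equivalent of that template lookup, using a fragment of the JSON above:

package main

import (
	"encoding/json"
	"fmt"
)

// Fragment of the `docker inspect` output above; field names match Docker's JSON.
const inspectJSON = `{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33063"}]}}}`

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var c container
	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
		panic(err)
	}
	// Equivalent of the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
	fmt.Println(c.NetworkSettings.Ports["22/tcp"][0].HostPort) // prints 33063
}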
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122: exit status 2 (322.615912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25: (1.04591836s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p old-k8s-version-694122 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:09.683208  252331 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:09.683522  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683533  252331 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:09.683539  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683817  252331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:09.684408  252331 out.go:368] Setting JSON to false
	I1228 06:56:09.686138  252331 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2322,"bootTime":1766902648,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:09.686216  252331 start.go:143] virtualization: kvm guest
	I1228 06:56:09.688379  252331 out.go:179] * [no-preload-950460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:09.689966  252331 notify.go:221] Checking for updates...
	I1228 06:56:09.690624  252331 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:09.691759  252331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:09.693287  252331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:09.694542  252331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:09.696489  252331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:09.698353  252331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:09.700204  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:09.700981  252331 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:09.731534  252331 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:09.731673  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.809872  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.797345649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.810012  252331 docker.go:319] overlay module found
	I1228 06:56:09.811872  252331 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:09.813113  252331 start.go:309] selected driver: docker
	I1228 06:56:09.813141  252331 start.go:928] validating driver "docker" against &{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.813261  252331 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:09.814183  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.889225  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.87743098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.889583  252331 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:09.889616  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:09.889688  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:09.889728  252331 start.go:353] cluster config:
	{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.892491  252331 out.go:179] * Starting "no-preload-950460" primary control-plane node in "no-preload-950460" cluster
	I1228 06:56:09.893559  252331 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:09.895822  252331 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:09.897246  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:09.897378  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:09.897665  252331 cache.go:107] acquiring lock: {Name:mkd9176dc8bfe34090aff279f6f101ea6f0af9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.897748  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 06:56:09.897763  252331 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.737µs
	I1228 06:56:09.897776  252331 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 06:56:09.897792  252331 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:09.897921  252331 cache.go:107] acquiring lock: {Name:mk7d35a6d2b389149dcbeab5c7c2ffb31f57d65c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898003  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 06:56:09.898018  252331 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 105.145µs
	I1228 06:56:09.898051  252331 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 06:56:09.898068  252331 cache.go:107] acquiring lock: {Name:mk242447cc3bf85a80c449b21152ddfbb942621c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898065  252331 cache.go:107] acquiring lock: {Name:mke2c1949855d4a55e5668b0d2ae93b37c482c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898080  252331 cache.go:107] acquiring lock: {Name:mk532de4689e044277857a73866e5969a2e4fbc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898114  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 06:56:09.898091  252331 cache.go:107] acquiring lock: {Name:mke47ac9c7c044600bef8f6b93ef0e26dc8302f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898122  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 06:56:09.898122  252331 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 56.777µs
	I1228 06:56:09.898131  252331 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 06:56:09.898131  252331 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 104.803µs
	I1228 06:56:09.898140  252331 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 06:56:09.898147  252331 cache.go:107] acquiring lock: {Name:mk9e59e568752d1ca479b7f88a0993095cc4ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898154  252331 cache.go:107] acquiring lock: {Name:mk4a1a601fb4bce5015f4152fc8c90f967d969a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898175  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 06:56:09.898185  252331 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 104.327µs
	I1228 06:56:09.898197  252331 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 06:56:09.898201  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 06:56:09.898209  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1228 06:56:09.898214  252331 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 145.471µs
	I1228 06:56:09.898217  252331 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.787µs
	I1228 06:56:09.898225  252331 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 06:56:09.898228  252331 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 06:56:09.898247  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 06:56:09.898255  252331 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 110.483µs
	I1228 06:56:09.898263  252331 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 06:56:09.898271  252331 cache.go:87] Successfully saved all images to host disk.
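	Every image this cluster needs was already cached, so each "save to tar file ... succeeded" above is a hit on an existing tarball rather than a download. As an illustrative spot-check (a sketch; it simply lists the cache directory named in the log lines above), the cached images can be inspected directly on the test host:
	
	  ls /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/
	  # expect kube-apiserver_v1.35.0, kube-proxy_v1.35.0, kube-scheduler_v1.35.0, etcd_3.6.6-0, pause_3.10.1, coredns/ ...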
	I1228 06:56:09.925389  252331 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:09.925420  252331 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:09.925442  252331 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:09.925482  252331 start.go:360] acquireMachinesLock for no-preload-950460: {Name:mk62d7b73784bafca52412532a69147c30805a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.925562  252331 start.go:364] duration metric: took 47.499µs to acquireMachinesLock for "no-preload-950460"
	I1228 06:56:09.925594  252331 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:09.925604  252331 fix.go:54] fixHost starting: 
	I1228 06:56:09.925883  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:09.947427  252331 fix.go:112] recreateIfNeeded on no-preload-950460: state=Stopped err=<nil>
	W1228 06:56:09.947470  252331 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:09.244143  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.744639  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.244325  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.364365  243963 kubeadm.go:1114] duration metric: took 4.219411016s to wait for elevateKubeSystemPrivileges
	I1228 06:56:10.364473  243963 kubeadm.go:403] duration metric: took 12.104828541s to StartCluster
	I1228 06:56:10.364513  243963 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.364574  243963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:10.367334  243963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.367689  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:10.368151  243963 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:10.368391  243963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:10.368490  243963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:10.368509  243963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	I1228 06:56:10.368558  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.369000  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.369135  243963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:10.369221  243963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:10.369280  243963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:10.369857  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.370623  243963 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:10.374484  243963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:10.403086  243963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:56:07.752961  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:09.756311  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:10.405267  243963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.405293  243963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:10.405355  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.407121  243963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	I1228 06:56:10.407166  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.408137  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.438924  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.442747  243963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.442772  243963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:10.442827  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.477359  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.532358  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:10.573979  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.588218  243963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:10.648019  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.867869  243963 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
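	The sed pipeline run at 06:56:10.532358 above is what produces this host record: per its expressions, it inserts a log directive before the Corefile's errors line and the following hosts stanza before the forward . /etc/resolv.conf line, making host.minikube.internal resolvable from pods:
	
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }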
	I1228 06:56:11.085832  243963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:11.095783  243963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.058672  247213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:56:09.063442  247213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:56:09.063466  247213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:56:09.077870  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:56:09.407176  247213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:56:09.407367  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.407468  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-500581 minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=default-k8s-diff-port-500581 minikube.k8s.io/primary=true
	I1228 06:56:09.580457  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.580543  247213 ops.go:34] apiserver oom_adj: -16
	I1228 06:56:10.080579  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.581243  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.080638  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.581312  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.080705  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.580620  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.081161  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.581441  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.652690  247213 kubeadm.go:1114] duration metric: took 4.245373726s to wait for elevateKubeSystemPrivileges
	I1228 06:56:13.652726  247213 kubeadm.go:403] duration metric: took 12.364737655s to StartCluster
	I1228 06:56:13.652748  247213 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.652812  247213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:13.654909  247213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.655206  247213 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:13.655359  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:13.655613  247213 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:13.655657  247213 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:13.655720  247213 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.655737  247213 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.655761  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.656261  247213 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.656283  247213 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:13.656613  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.657602  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.660155  247213 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:13.661579  247213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:13.684520  247213 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:11.097178  243963 addons.go:530] duration metric: took 728.781424ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:11.372202  243963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422591" context rescaled to 1 replicas
	W1228 06:56:13.088569  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:13.685585  247213 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.685607  247213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:13.685662  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.686151  247213 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.686203  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.686699  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.718321  247213 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.718423  247213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:13.718565  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.728024  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.751115  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.767540  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:13.826652  247213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:13.845102  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.860783  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.971728  247213 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:13.973616  247213 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:14.185139  247213 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.949330  252331 out.go:252] * Restarting existing docker container for "no-preload-950460" ...
	I1228 06:56:09.949409  252331 cli_runner.go:164] Run: docker start no-preload-950460
	I1228 06:56:10.304369  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:10.333247  252331 kic.go:430] container "no-preload-950460" state is running.
	I1228 06:56:10.333791  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:10.362343  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:10.362749  252331 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:10.362898  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:10.399401  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:10.400763  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:10.400782  252331 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:10.401698  252331 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42784->127.0.0.1:33078: read: connection reset by peer
	I1228 06:56:13.530578  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.530607  252331 ubuntu.go:182] provisioning hostname "no-preload-950460"
	I1228 06:56:13.530671  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.551523  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.551766  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.551782  252331 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-950460 && echo "no-preload-950460" | sudo tee /etc/hostname
	I1228 06:56:13.697078  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.697213  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.734170  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.734651  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.734718  252331 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:13.876570  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
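	In other words, the script above idempotently pins the container's own hostname: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended, so /etc/hosts ends up containing:
	
	  127.0.1.1 no-preload-950460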
	I1228 06:56:13.876646  252331 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:13.878995  252331 ubuntu.go:190] setting up certificates
	I1228 06:56:13.879017  252331 provision.go:84] configureAuth start
	I1228 06:56:13.879096  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:13.902076  252331 provision.go:143] copyHostCerts
	I1228 06:56:13.902141  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:13.902162  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:13.902253  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:13.902388  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:13.902401  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:13.902438  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:13.902511  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:13.902520  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:13.902560  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:13.902624  252331 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.no-preload-950460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-950460]
	I1228 06:56:14.048352  252331 provision.go:177] copyRemoteCerts
	I1228 06:56:14.048419  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:14.048452  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.068611  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:14.168261  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:14.190018  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:56:14.208765  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:14.226610  252331 provision.go:87] duration metric: took 347.581995ms to configureAuth
	I1228 06:56:14.226635  252331 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:14.226812  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:14.226900  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.244598  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:14.244866  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:14.244892  252331 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:12.253209  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:14.796990  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:15.100866  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:15.100892  252331 machine.go:97] duration metric: took 4.738124144s to provisionDockerMachine
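	The drop-in written during that provisioning pass can be verified on the node once crio has restarted (a trivial sketch, using the path from the SSH command above):
	
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '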
	I1228 06:56:15.100904  252331 start.go:293] postStartSetup for "no-preload-950460" (driver="docker")
	I1228 06:56:15.100918  252331 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:15.101012  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:15.101073  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.125860  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.230154  252331 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:15.234858  252331 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:15.234891  252331 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:15.234905  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:15.234956  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:15.235108  252331 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:15.235252  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:15.245155  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:15.268602  252331 start.go:296] duration metric: took 167.682246ms for postStartSetup
	I1228 06:56:15.268700  252331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:15.268759  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.288607  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.381324  252331 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:15.386166  252331 fix.go:56] duration metric: took 5.460557205s for fixHost
	I1228 06:56:15.386193  252331 start.go:83] releasing machines lock for "no-preload-950460", held for 5.460617152s
	I1228 06:56:15.386267  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:15.405738  252331 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:15.405806  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.405845  252331 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:15.405936  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.426086  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.426572  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.573340  252331 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:15.580022  252331 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:15.614860  252331 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:15.619799  252331 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:15.619859  252331 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:15.627841  252331 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:15.627863  252331 start.go:496] detecting cgroup driver to use...
	I1228 06:56:15.627897  252331 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:15.627935  252331 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:15.643627  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:15.656486  252331 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:15.656542  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:15.670796  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:15.683099  252331 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:15.763732  252331 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:15.846193  252331 docker.go:234] disabling docker service ...
	I1228 06:56:15.846248  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:15.860365  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:15.872316  252331 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:15.952498  252331 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:16.036768  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:16.048883  252331 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:16.062667  252331 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:16.062719  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.072039  252331 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:16.072100  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.080521  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.089148  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.097405  252331 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:16.105158  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.113413  252331 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.122659  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.131327  252331 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:16.138849  252331 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:16.145687  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.222679  252331 ssh_runner.go:195] Run: sudo systemctl restart crio
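	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image to registry.k8s.io/pause:3.10.1, setting cgroup_manager = "systemd" with conmon_cgroup = "pod", and adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick way to confirm the drop-in after the restart (keys taken from the commands above):
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf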
	I1228 06:56:16.520445  252331 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:16.520595  252331 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:16.524711  252331 start.go:574] Will wait 60s for crictl version
	I1228 06:56:16.524766  252331 ssh_runner.go:195] Run: which crictl
	I1228 06:56:16.528189  252331 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:16.553043  252331 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:16.553151  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.580248  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.608534  252331 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:14.186403  247213 addons.go:530] duration metric: took 530.739381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:14.479845  247213 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-500581" context rescaled to 1 replicas
	W1228 06:56:15.976454  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
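	These node_ready retries poll the node's Ready condition until it reports True. Outside the harness, an equivalent check (a sketch with plain kubectl against the kubeconfig updated earlier; the jsonpath filter is illustrative) is:
	
	  kubectl --kubeconfig=/home/jenkins/minikube-integration/22352-5550/kubeconfig get node default-k8s-diff-port-500581 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'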
	I1228 06:56:16.609592  252331 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:16.626775  252331 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:16.630900  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
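The /etc/hosts update above is the usual idempotent rewrite: strip any stale host.minikube.internal line, re-append the current mapping, and install the result in one privileged copy. An annotated sketch of the same pattern:

	# drop any existing entry for the name (grep -v), re-add the fresh one,
	# then install via a temp file and a single sudo cp
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.94.1	host.minikube.internal"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts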
	I1228 06:56:16.641409  252331 kubeadm.go:884] updating cluster {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:16.641518  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:16.641556  252331 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:16.675102  252331 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:16.675123  252331 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:16.675129  252331 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:16.675244  252331 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-950460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:16.675331  252331 ssh_runner.go:195] Run: crio config
	I1228 06:56:16.718702  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:16.718733  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:16.718752  252331 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:16.718789  252331 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950460 NodeName:no-preload-950460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:16.718988  252331 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
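The four-document manifest above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later diffed against the live copy. Recent kubeadm releases can also sanity-check such a multi-document config directly; a sketch, assuming a kubeadm binary with the validate subcommand is available on the node:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new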
	I1228 06:56:16.719070  252331 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:16.727836  252331 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:16.727925  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:16.735688  252331 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:16.748533  252331 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:16.761180  252331 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1228 06:56:16.774346  252331 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:16.777963  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:16.787778  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.870258  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:16.897229  252331 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460 for IP: 192.168.94.2
	I1228 06:56:16.897252  252331 certs.go:195] generating shared ca certs ...
	I1228 06:56:16.897273  252331 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:16.897417  252331 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:16.897469  252331 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:16.897483  252331 certs.go:257] generating profile certs ...
	I1228 06:56:16.897565  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key
	I1228 06:56:16.897621  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947
	I1228 06:56:16.897659  252331 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key
	I1228 06:56:16.897752  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:16.897786  252331 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:16.897800  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:16.897832  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:16.897861  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:16.897894  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:16.897943  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:16.898713  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:16.917010  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:16.936367  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:16.957237  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:16.980495  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:16.998372  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 06:56:17.015059  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:17.031891  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:17.049280  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:17.065663  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:17.082832  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:17.100902  252331 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:17.113166  252331 ssh_runner.go:195] Run: openssl version
	I1228 06:56:17.119103  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.126689  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:17.134233  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.137970  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.138010  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.174376  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:17.182094  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.189546  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:17.196673  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200312  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200355  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.235404  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:17.243056  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.251423  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:17.259118  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262689  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262740  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.298353  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
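The ls/hash/symlink sequence for each PEM above follows OpenSSL's c_rehash convention: certificates under /etc/ssl/certs are looked up via <subject-hash>.0 symlinks, which is why the log checks for b5213941.0, 51391683.0 and 3ec20f2e.0. A sketch of the same steps for one cert:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"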
	I1228 06:56:17.306420  252331 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:17.310366  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:17.344608  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:17.380698  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:17.426014  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:17.474223  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:17.531854  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
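Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what gates regeneration here. A loop-form sketch over the same files:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/$c.crt" || echo "$c.crt expires within 24h"
	done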
	I1228 06:56:17.577281  252331 kubeadm.go:401] StartCluster: {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:17.577434  252331 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:17.636151  252331 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:17.648977  252331 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:17.649067  252331 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:17.657728  252331 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:17.657748  252331 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:17.657796  252331 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:17.666778  252331 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:17.668081  252331 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-950460" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.668996  252331 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-950460" cluster setting kubeconfig missing "no-preload-950460" context setting]
	I1228 06:56:17.670453  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.672683  252331 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:17.683544  252331 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1228 06:56:17.683585  252331 kubeadm.go:602] duration metric: took 25.829752ms to restartPrimaryControlPlane
	I1228 06:56:17.683596  252331 kubeadm.go:403] duration metric: took 106.327386ms to StartCluster
	I1228 06:56:17.683615  252331 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.683665  252331 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.686260  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.686556  252331 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:17.686676  252331 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:17.686779  252331 addons.go:70] Setting storage-provisioner=true in profile "no-preload-950460"
	I1228 06:56:17.686790  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:17.686794  252331 addons.go:239] Setting addon storage-provisioner=true in "no-preload-950460"
	W1228 06:56:17.686802  252331 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:17.686829  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686834  252331 addons.go:70] Setting default-storageclass=true in profile "no-preload-950460"
	I1228 06:56:17.686838  252331 addons.go:70] Setting dashboard=true in profile "no-preload-950460"
	I1228 06:56:17.686865  252331 addons.go:239] Setting addon dashboard=true in "no-preload-950460"
	W1228 06:56:17.686879  252331 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:17.686912  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686847  252331 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950460"
	I1228 06:56:17.687329  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687415  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687330  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.689184  252331 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:17.690310  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:17.712805  252331 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:17.713229  252331 addons.go:239] Setting addon default-storageclass=true in "no-preload-950460"
	W1228 06:56:17.713248  252331 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:17.713270  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.713562  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.713731  252331 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:17.713774  252331 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.713791  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:17.713835  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.715782  252331 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1228 06:56:15.089728  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:17.589238  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:17.716776  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:17.716793  252331 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:17.716846  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.737306  252331 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.737329  252331 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:17.737387  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.747296  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.752550  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.763145  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.827637  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:17.841176  252331 node_ready.go:35] waiting up to 6m0s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:17.852679  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.859387  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:17.859413  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:17.870358  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.876579  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:17.876626  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:17.892110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:17.892137  252331 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:17.907110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:17.907153  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:17.921175  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:17.921199  252331 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:17.934592  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:17.934610  252331 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:17.946620  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:17.946645  252331 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:17.958616  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:17.958637  252331 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:17.971511  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:17.971531  252331 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:17.984466  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
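The single kubectl apply above enumerates all ten dashboard manifests explicitly. Since every manifest was staged under /etc/kubernetes/addons/, an equivalent hand-run form could point at the directory instead; this would also re-apply the storage manifests copied there earlier, which is harmless because apply is idempotent. A sketch:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/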
	I1228 06:56:19.111197  252331 node_ready.go:49] node "no-preload-950460" is "Ready"
	I1228 06:56:19.111234  252331 node_ready.go:38] duration metric: took 1.270013468s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:19.111250  252331 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:19.111303  252331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:19.644061  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.791326834s)
	I1228 06:56:19.644127  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.773734972s)
	I1228 06:56:19.644217  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.659719643s)
	I1228 06:56:19.644238  252331 api_server.go:72] duration metric: took 1.957648252s to wait for apiserver process to appear ...
	I1228 06:56:19.644247  252331 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:19.644265  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:19.646079  252331 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-950460 addons enable metrics-server
	
	I1228 06:56:19.648689  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:19.648710  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
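The verbose healthz breakdown above (two poststarthook checks still failing right after the restart) can be fetched by hand; under Kubernetes' default RBAC bootstrap roles the bare health endpoints are readable anonymously (an assumption about this cluster's settings), so a sketch is just:

	curl -sk "https://192.168.94.2:8443/healthz?verbose"

The same endpoint flips to a plain 200/ok once the bootstrap hooks finish, as the 06:56:20 probe further below shows.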
	I1228 06:56:19.652919  252331 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:56:19.654055  252331 addons.go:530] duration metric: took 1.967385599s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1228 06:56:17.252978  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:19.752632  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:17.976710  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.476521  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.089066  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:22.089199  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:23.089137  243963 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:23.089171  243963 node_ready.go:38] duration metric: took 12.00330569s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:23.089188  243963 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:23.089247  243963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:23.109640  243963 api_server.go:72] duration metric: took 12.740459175s to wait for apiserver process to appear ...
	I1228 06:56:23.109670  243963 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:23.109691  243963 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:23.115347  243963 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:56:23.116388  243963 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:23.116413  243963 api_server.go:131] duration metric: took 6.736322ms to wait for apiserver health ...
	I1228 06:56:23.116422  243963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:23.120151  243963 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:23.120183  243963 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.120191  243963 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.120197  243963 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.120217  243963 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.120229  243963 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.120236  243963 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.120242  243963 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.120247  243963 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.120255  243963 system_pods.go:74] duration metric: took 3.827732ms to wait for pod list to return data ...
	I1228 06:56:23.120267  243963 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:23.122455  243963 default_sa.go:45] found service account: "default"
	I1228 06:56:23.122484  243963 default_sa.go:55] duration metric: took 2.209324ms for default service account to be created ...
	I1228 06:56:23.122495  243963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:23.125732  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.125761  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.125768  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.125774  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.125782  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.125798  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.125806  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.125812  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.125821  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.125858  243963 retry.go:84] will retry after 300ms: missing components: kube-dns
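The missing component here is kube-dns, i.e. the coredns pod still Pending above, hence the 300ms retry. A hand-check sketch against the same cluster, assuming the test host's kubeconfig carries this context:

	kubectl --context embed-certs-422591 -n kube-system get pods -l k8s-app=kube-dns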
	I1228 06:56:23.380969  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.381005  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.381014  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.381023  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.381042  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.381051  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.381057  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.381067  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.381075  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:23.736873  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.736924  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.736933  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.736942  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.736955  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:23.736965  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.736971  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.736990  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.737002  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:24.078656  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.078690  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:24.078696  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.078700  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.078704  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.078709  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.078712  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.078715  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.078721  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.144322  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.148700  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:20.148728  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:20.644327  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.648377  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 06:56:20.649429  252331 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:20.649449  252331 api_server.go:131] duration metric: took 1.005195846s to wait for apiserver health ...
	I1228 06:56:20.649458  252331 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:20.652593  252331 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:20.652630  252331 system_pods.go:61] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.652637  252331 system_pods.go:61] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.652644  252331 system_pods.go:61] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.652653  252331 system_pods.go:61] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.652667  252331 system_pods.go:61] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.652675  252331 system_pods.go:61] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.652686  252331 system_pods.go:61] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.652694  252331 system_pods.go:61] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.652703  252331 system_pods.go:74] duration metric: took 3.239436ms to wait for pod list to return data ...
	I1228 06:56:20.652715  252331 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:20.654840  252331 default_sa.go:45] found service account: "default"
	I1228 06:56:20.654856  252331 default_sa.go:55] duration metric: took 2.135398ms for default service account to be created ...
	I1228 06:56:20.654863  252331 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:20.656911  252331 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:20.656935  252331 system_pods.go:89] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.656943  252331 system_pods.go:89] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.656950  252331 system_pods.go:89] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.656955  252331 system_pods.go:89] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.656961  252331 system_pods.go:89] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.656969  252331 system_pods.go:89] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.656974  252331 system_pods.go:89] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.656979  252331 system_pods.go:89] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.656988  252331 system_pods.go:126] duration metric: took 2.120486ms to wait for k8s-apps to be running ...
	I1228 06:56:20.656995  252331 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:20.657051  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:20.671024  252331 system_svc.go:56] duration metric: took 14.023192ms WaitForService to wait for kubelet
	I1228 06:56:20.671072  252331 kubeadm.go:587] duration metric: took 2.984480725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:20.671093  252331 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:20.673706  252331 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:20.673727  252331 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:20.673740  252331 node_conditions.go:105] duration metric: took 2.643602ms to run NodePressure ...
	I1228 06:56:20.673752  252331 start.go:242] waiting for startup goroutines ...
	I1228 06:56:20.673758  252331 start.go:247] waiting for cluster config update ...
	I1228 06:56:20.673773  252331 start.go:256] writing updated cluster config ...
	I1228 06:56:20.674067  252331 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:20.677778  252331 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:20.681121  252331 pod_ready.go:83] waiting for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:56:22.686104  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:22.251764  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:24.253072  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:24.497471  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.497502  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running
	I1228 06:56:24.497510  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.497516  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.497521  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.497528  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.497533  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.497539  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.497545  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running
	I1228 06:56:24.497556  243963 system_pods.go:126] duration metric: took 1.375053604s to wait for k8s-apps to be running ...
	I1228 06:56:24.497578  243963 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:24.497628  243963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:24.514567  243963 system_svc.go:56] duration metric: took 16.979492ms WaitForService to wait for kubelet
	I1228 06:56:24.514605  243963 kubeadm.go:587] duration metric: took 14.145429952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:24.514629  243963 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:24.518108  243963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:24.518140  243963 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:24.518158  243963 node_conditions.go:105] duration metric: took 3.522325ms to run NodePressure ...
	I1228 06:56:24.518177  243963 start.go:242] waiting for startup goroutines ...
	I1228 06:56:24.518186  243963 start.go:247] waiting for cluster config update ...
	I1228 06:56:24.518200  243963 start.go:256] writing updated cluster config ...
	I1228 06:56:24.518505  243963 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:24.523480  243963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:24.528339  243963 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.533314  243963 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:56:24.533340  243963 pod_ready.go:86] duration metric: took 4.973959ms for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.535652  243963 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.540088  243963 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:56:24.540118  243963 pod_ready.go:86] duration metric: took 4.440493ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.542361  243963 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.546378  243963 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:56:24.546401  243963 pod_ready.go:86] duration metric: took 4.016397ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.548746  243963 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.928795  243963 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:56:24.928827  243963 pod_ready.go:86] duration metric: took 380.060187ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.129424  243963 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.528796  243963 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:56:25.528829  243963 pod_ready.go:86] duration metric: took 399.379664ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.728149  243963 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129240  243963 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:56:26.129352  243963 pod_ready.go:86] duration metric: took 401.16633ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129383  243963 pod_ready.go:40] duration metric: took 1.605872095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:26.195003  243963 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:26.196497  243963 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
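	The pod_ready gate in the lines above waits up to 4m0s for each labelled kube-system pod to report Ready. A rough out-of-band equivalent of that gate, assuming the context name from the log and mapping the 4m0s window onto kubectl's timeout flag (a sketch, not minikube's actual code path):
	
	# Approximate minikube's pod_ready wait for one of the listed labels
	kubectl --context embed-certs-422591 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m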
	W1228 06:56:22.478649  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:24.977721  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:26.478547  247213 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:26.478581  247213 node_ready.go:38] duration metric: took 12.504894114s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:26.478597  247213 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:26.478645  247213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:26.500009  247213 api_server.go:72] duration metric: took 12.844753456s to wait for apiserver process to appear ...
	I1228 06:56:26.500069  247213 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:26.500092  247213 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:56:26.505791  247213 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:56:26.506819  247213 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:26.506850  247213 api_server.go:131] duration metric: took 6.772745ms to wait for apiserver health ...
	I1228 06:56:26.506860  247213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:26.511152  247213 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:26.511188  247213 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.511196  247213 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.511210  247213 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.511217  247213 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.511223  247213 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.511228  247213 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.511237  247213 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.511245  247213 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.511257  247213 system_pods.go:74] duration metric: took 4.390309ms to wait for pod list to return data ...
	I1228 06:56:26.511272  247213 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:26.516259  247213 default_sa.go:45] found service account: "default"
	I1228 06:56:26.516290  247213 default_sa.go:55] duration metric: took 5.010014ms for default service account to be created ...
	I1228 06:56:26.516302  247213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:26.522640  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.522682  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.522692  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.522701  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.522706  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.522712  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.522718  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.522725  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.522732  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.522761  247213 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1228 06:56:26.727648  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.727695  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.727705  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.727714  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.727719  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.727726  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.727733  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.727739  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.727753  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.048953  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.048983  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.048988  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.048995  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.048999  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.049002  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.049006  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.049012  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.049019  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.347697  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.347744  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.347753  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.347761  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.347767  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.347773  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.347779  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.347784  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.347792  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.894612  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.894645  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running
	I1228 06:56:27.894654  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.894661  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.894668  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.894674  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.894747  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.894780  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.894786  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running
	I1228 06:56:27.894796  247213 system_pods.go:126] duration metric: took 1.378485807s to wait for k8s-apps to be running ...
	I1228 06:56:27.894807  247213 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:27.894877  247213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:27.913725  247213 system_svc.go:56] duration metric: took 18.908162ms WaitForService to wait for kubelet
	I1228 06:56:27.913765  247213 kubeadm.go:587] duration metric: took 14.258529006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:27.913788  247213 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:27.917024  247213 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:27.917082  247213 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:27.917101  247213 node_conditions.go:105] duration metric: took 3.307449ms to run NodePressure ...
	I1228 06:56:27.917117  247213 start.go:242] waiting for startup goroutines ...
	I1228 06:56:27.917128  247213 start.go:247] waiting for cluster config update ...
	I1228 06:56:27.917147  247213 start.go:256] writing updated cluster config ...
	I1228 06:56:27.917432  247213 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:27.922292  247213 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:27.928675  247213 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.933976  247213 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:56:27.934000  247213 pod_ready.go:86] duration metric: took 5.293782ms for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.952822  247213 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.957941  247213 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.957969  247213 pod_ready.go:86] duration metric: took 5.117578ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.960256  247213 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.964517  247213 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.964541  247213 pod_ready.go:86] duration metric: took 4.26155ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.966612  247213 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.326675  247213 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:28.326711  247213 pod_ready.go:86] duration metric: took 360.070556ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.527492  247213 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.926562  247213 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:56:28.926586  247213 pod_ready.go:86] duration metric: took 398.654778ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.128257  247213 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527347  247213 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:29.527373  247213 pod_ready.go:86] duration metric: took 399.091542ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527384  247213 pod_ready.go:40] duration metric: took 1.605062412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.572470  247213 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:29.574045  247213 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
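	The system_pods retries above poll the pod list until kube-dns leaves Pending. The same transition can be observed interactively (context name taken from the log above):
	
	# Watch kube-system pods move from Pending to Running
	kubectl --context default-k8s-diff-port-500581 -n kube-system get pods --watch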
	W1228 06:56:24.687607  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:27.187235  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:26.754423  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:28.252283  242715 pod_ready.go:94] pod "coredns-5dd5756b68-f75js" is "Ready"
	I1228 06:56:28.252312  242715 pod_ready.go:86] duration metric: took 34.005583819s for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.255219  242715 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.259146  242715 pod_ready.go:94] pod "etcd-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.259168  242715 pod_ready.go:86] duration metric: took 3.930339ms for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.261639  242715 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.265232  242715 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.265251  242715 pod_ready.go:86] duration metric: took 3.589847ms for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.267802  242715 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.450233  242715 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.450266  242715 pod_ready.go:86] duration metric: took 182.442698ms for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.651005  242715 pod_ready.go:83] waiting for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.050020  242715 pod_ready.go:94] pod "kube-proxy-ckjcc" is "Ready"
	I1228 06:56:29.050071  242715 pod_ready.go:86] duration metric: took 399.008645ms for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.250805  242715 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650219  242715 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-694122" is "Ready"
	I1228 06:56:29.650260  242715 pod_ready.go:86] duration metric: took 399.415539ms for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650277  242715 pod_ready.go:40] duration metric: took 35.408765036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.699567  242715 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 06:56:29.701172  242715 out.go:203] 
	W1228 06:56:29.702316  242715 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 06:56:29.703412  242715 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:56:29.704563  242715 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-694122" cluster and "default" namespace by default
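	The warning above flags client/server minor-version skew: kubectl 1.35.0 against a v1.28.0 apiserver (skew 7, where kubectl is only supported within one minor version of the server). To confirm the skew, or to run a version-matched client as the log itself suggests:
	
	# Show client and server versions for this profile's context
	kubectl --context old-k8s-version-694122 version
	# Invoke a kubectl matched to the cluster version through minikube
	minikube -p old-k8s-version-694122 kubectl -- get pods -A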
	W1228 06:56:29.687654  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:32.186292  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:34.688806  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:37.186324  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:12 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:12.583861747Z" level=info msg="Started container" PID=1749 containerID=c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper id=d8f72510-df38-4a1e-b66d-8200ed7fcfac name=/runtime.v1.RuntimeService/StartContainer sandboxID=0808fc8b39668eb765d43cd53be6d550282e68223c13e4c44416f894f4c89a48
	Dec 28 06:56:13 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:13.530022934Z" level=info msg="Removing container: b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590" id=154e6c22-398f-4ba2-ab27-5461aa61feb4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:13 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:13.540722929Z" level=info msg="Removed container b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=154e6c22-398f-4ba2-ab27-5461aa61feb4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.56020008Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6242be8e-0820-4d36-a206-dfc6457fbd5a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.563073552Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3f8fe7a6-c2ae-4b26-af3a-d75d7f6379d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.564501421Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8d1349e6-709e-487a-8a7a-2ffab78862b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.564647689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.571618039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.57204935Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6107831efab17c6445982290999edcdfbf3a71ff069ed50755394b0bb934f622/merged/etc/passwd: no such file or directory"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.572086214Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6107831efab17c6445982290999edcdfbf3a71ff069ed50755394b0bb934f622/merged/etc/group: no such file or directory"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.572398901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.600189949Z" level=info msg="Created container d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500: kube-system/storage-provisioner/storage-provisioner" id=8d1349e6-709e-487a-8a7a-2ffab78862b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.600783768Z" level=info msg="Starting container: d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500" id=533fb44d-8a59-4498-b438-60834329cc4b name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.60266285Z" level=info msg="Started container" PID=1764 containerID=d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500 description=kube-system/storage-provisioner/storage-provisioner id=533fb44d-8a59-4498-b438-60834329cc4b name=/runtime.v1.RuntimeService/StartContainer sandboxID=4214aa96db9c297a97f32344eb657fdea04fd2b6d854cc7e84324a2dc8dc18fd
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.434631051Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc4e48b5-37de-48a9-a58b-6796bbb51523 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.435689897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=077bdfb4-65a2-4682-8f7b-28037c07eee6 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.436847019Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=4c206b6f-dd78-452f-8e56-04cb49c41a75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.437008869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.444452577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.445249606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.486789085Z" level=info msg="Created container 634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=4c206b6f-dd78-452f-8e56-04cb49c41a75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.48757547Z" level=info msg="Starting container: 634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0" id=2c5c78bc-a428-4228-8e7c-193ba7dbdcc1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.491192029Z" level=info msg="Started container" PID=1779 containerID=634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper id=2c5c78bc-a428-4228-8e7c-193ba7dbdcc1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0808fc8b39668eb765d43cd53be6d550282e68223c13e4c44416f894f4c89a48
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.568854047Z" level=info msg="Removing container: c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1" id=a4054836-346a-45f2-9a60-a74395d55074 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.579146825Z" level=info msg="Removed container c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=a4054836-346a-45f2-9a60-a74395d55074 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	634b121325efb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   0808fc8b39668       dashboard-metrics-scraper-5f989dc9cf-mkkbk       kubernetes-dashboard
	d7436b6e793ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         2                   4214aa96db9c2       storage-provisioner                              kube-system
	10a25e4b39812       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   b3037a7a10e70       kubernetes-dashboard-8694d4445c-qf9rt            kubernetes-dashboard
	6639adc29af47       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   52ceeafae214e       busybox                                          default
	182591cdfe4f6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     1                   17f02b5b39d0f       coredns-5dd5756b68-f75js                         kube-system
	5bcdfe2687f64       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 1                   07e71cbc44781       kindnet-v7rhd                                    kube-system
	b8f42856aab91       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  1                   062e68583870d       kube-proxy-ckjcc                                 kube-system
	a008846c26c9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         1                   4214aa96db9c2       storage-provisioner                              kube-system
	55724a0fe5b72       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              1                   8a6c7392541b8       kube-apiserver-old-k8s-version-694122            kube-system
	6fcb9aa2e6b19       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        1                   bcabddefdb866       etcd-old-k8s-version-694122                      kube-system
	c9544b4339f1c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              1                   da915cd0b9c90       kube-scheduler-old-k8s-version-694122            kube-system
	f8f115ceec0e8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     1                   8af419195ec82       kube-controller-manager-old-k8s-version-694122   kube-system
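	The table above is CRI container state collected on the node; the Exited dashboard-metrics-scraper row (attempt 2) lines up with the CrashLoopBackOff entries in the kubelet log below. A minimal way to regenerate it, assuming the node image ships crictl:
	
	# List all CRI-O containers, including exited ones
	minikube -p old-k8s-version-694122 ssh -- sudo crictl ps -a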
	
	
	==> describe nodes <==
	Name:               old-k8s-version-694122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-694122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_54_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-694122
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:56:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:55:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-694122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                65f3a296-84a7-49ed-b5c2-55741073e206
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-f75js                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-694122                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-v7rhd                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-694122             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-694122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-ckjcc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-694122             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mkkbk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qf9rt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x9 over 2m2s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-694122 event: Registered Node old-k8s-version-694122 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-694122 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-694122 event: Registered Node old-k8s-version-694122 in Controller
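	This node summary and event list correspond to kubectl describe output; the three "Starting kubelet." events reflect the initial start, the stop/start cycle, and the restart this serial test exercises. To regenerate it against the same context:
	
	kubectl --context old-k8s-version-694122 describe node old-k8s-version-694122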
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:56:44 up 39 min,  0 user,  load average: 3.07, 2.66, 1.73
	Linux old-k8s-version-694122 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.336847     723 topology_manager.go:215] "Topology Admit Handler" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357506     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6j7\" (UniqueName: \"kubernetes.io/projected/3117b9db-546f-40fa-8346-edd08efb1341-kube-api-access-sl6j7\") pod \"kubernetes-dashboard-8694d4445c-qf9rt\" (UID: \"3117b9db-546f-40fa-8346-edd08efb1341\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357560     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ce4bfb24-cae7-489f-9305-08e4a6df88cc-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mkkbk\" (UID: \"ce4bfb24-cae7-489f-9305-08e4a6df88cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357590     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3117b9db-546f-40fa-8346-edd08efb1341-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qf9rt\" (UID: \"3117b9db-546f-40fa-8346-edd08efb1341\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357620     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq4qn\" (UniqueName: \"kubernetes.io/projected/ce4bfb24-cae7-489f-9305-08e4a6df88cc-kube-api-access-zq4qn\") pod \"dashboard-metrics-scraper-5f989dc9cf-mkkbk\" (UID: \"ce4bfb24-cae7-489f-9305-08e4a6df88cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:12 old-k8s-version-694122 kubelet[723]: I1228 06:56:12.523613     723 scope.go:117] "RemoveContainer" containerID="b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590"
	Dec 28 06:56:12 old-k8s-version-694122 kubelet[723]: I1228 06:56:12.539464     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt" podStartSLOduration=3.93451934 podCreationTimestamp="2025-12-28 06:56:05 +0000 UTC" firstStartedPulling="2025-12-28 06:56:05.671171109 +0000 UTC m=+16.408637525" lastFinishedPulling="2025-12-28 06:56:09.27605435 +0000 UTC m=+20.013520773" observedRunningTime="2025-12-28 06:56:09.548570585 +0000 UTC m=+20.286037009" watchObservedRunningTime="2025-12-28 06:56:12.539402588 +0000 UTC m=+23.276869011"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: I1228 06:56:13.528390     723 scope.go:117] "RemoveContainer" containerID="b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: I1228 06:56:13.528574     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: E1228 06:56:13.528926     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:14 old-k8s-version-694122 kubelet[723]: I1228 06:56:14.532491     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:14 old-k8s-version-694122 kubelet[723]: E1228 06:56:14.532843     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:15 old-k8s-version-694122 kubelet[723]: I1228 06:56:15.639218     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:15 old-k8s-version-694122 kubelet[723]: E1228 06:56:15.639481     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:24 old-k8s-version-694122 kubelet[723]: I1228 06:56:24.559220     723 scope.go:117] "RemoveContainer" containerID="a008846c26c9eef13eef5f37d0f0812a09129981cff37d7db2ce2c8b39b00d8c"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.433888     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.567464     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.567775     723 scope.go:117] "RemoveContainer" containerID="634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: E1228 06:56:26.568346     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:35 old-k8s-version-694122 kubelet[723]: I1228 06:56:35.639392     723 scope.go:117] "RemoveContainer" containerID="634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0"
	Dec 28 06:56:35 old-k8s-version-694122 kubelet[723]: E1228 06:56:35.639655     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: kubelet.service: Consumed 1.537s CPU time.
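	The closing systemd lines show kubelet being stopped cleanly, consistent with the Pause subtest running at collection time rather than a kubelet crash. The same unit log can be pulled straight from the node (a sketch, assuming journal access over minikube ssh):
	
	minikube -p old-k8s-version-694122 ssh -- sudo journalctl -u kubelet --no-pager -n 50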
	

-- /stdout --
** stderr ** 
	E1228 06:56:44.187985  258694 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.250700  258694 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.312966  258694 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.374232  258694 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.434744  258694 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.495340  258694 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.557873  258694 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.618891  258694 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:44.685018  258694 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:44Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
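
A note on the failure mode: every "Failed to list containers" error above is the same probe failing. The harness shells into the node and runs sudo runc --root /run/runc list -f json, and runc exits with status 1 because /run/runc does not exist on this crio image, so every per-component listing fails identically. A minimal Go sketch of such a probe follows (illustrative only, not minikube's actual code; the runcContainer fields are assumptions based on runc's documented JSON output):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer models the subset of `runc list -f json` output used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// listPaused returns the IDs of paused containers under the given state root.
	func listPaused(root string) ([]string, error) {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			// This is the branch the log hits: the state root is missing,
			// runc exits 1, and the check fails before any JSON is read.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused("/run/runc")
		fmt.Println(paused, err)
	}
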
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694122 -n old-k8s-version-694122: exit status 2 (319.957389ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-694122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-694122
helpers_test.go:244: (dbg) docker inspect old-k8s-version-694122:

-- stdout --
	[
	    {
	        "Id": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	        "Created": "2025-12-28T06:54:32.483449473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:55:40.77317765Z",
	            "FinishedAt": "2025-12-28T06:55:39.878075511Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/hosts",
	        "LogPath": "/var/lib/docker/containers/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4/0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4-json.log",
	        "Name": "/old-k8s-version-694122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-694122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-694122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0dd1cc4ae5d6c069007f47d3844c99e6fd488856031b6098669f2a2d9266b8e4",
	                "LowerDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e198016d10833ae2b69d72eb0480c9e3ae293195212da3a517ed434306dae9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-694122",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-694122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-694122",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-694122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8a2c1f2ca0edda7fc61319821d1b1b9478e21a8166e58c5ceefe4687ad1185e",
	            "SandboxKey": "/var/run/docker/netns/f8a2c1f2ca0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-694122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "910bcfa8529441ad2bfa62f448459947be2ed515eaa365c95b9fc10d53f59423",
	                    "EndpointID": "c5067af631bf950ff3e81937626ccb025d7006ff6324b6a2e17ab1a68dd827b4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4a:77:23:35:01:07",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-694122",
	                        "0dd1cc4ae5d6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
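
The inspect dump above is what the harness's port lookups parse: the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} seen in the cli_runner lines resolves to the host port mapped to the container's SSH port (33063 here). A minimal offline sketch of the same extraction in Go, assuming inspect JSON shaped like the dump (the struct is illustrative, not a Docker API type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// inspectEntry models only the NetworkSettings.Ports shape seen above.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var entries []inspectEntry // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil || len(entries) == 0 {
			fmt.Fprintln(os.Stderr, "decode failed:", err)
			os.Exit(1)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			fmt.Fprintln(os.Stderr, "no 22/tcp binding")
			os.Exit(1)
		}
		fmt.Println(bindings[0].HostPort) // prints 33063 for the dump above
	}

Fed with, e.g., docker inspect old-k8s-version-694122 on stdin, it prints the same port the harness uses to open its SSH client.
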
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122: exit status 2 (317.496272ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
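
The --format arguments above ({{.Host}}, {{.APIServer}}) are Go text/template expressions evaluated against minikube's status struct; the command's exit status reports health separately from whatever field the template prints, which is how both fields can print "Running" while the command exits 2 (hence the harness's "may be ok"). A minimal sketch of that mechanism, with a hypothetical Status type standing in for minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is illustrative; minikube's real status struct has more fields.
	type Status struct {
		Host      string
		APIServer string
		Kubelet   string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Running", Kubelet: "Stopped"}
		// e.g. `go run . '{{.APIServer}}'` prints "Running"
		tmpl := template.Must(template.New("status").Parse(os.Args[1]))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			os.Exit(1)
		}
	}
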
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-694122 logs -n 25: (1.04266044s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p cert-expiration-623987                                                                                                                                                                                                                     │ cert-expiration-623987       │ jenkins │ v1.37.0 │ 28 Dec 25 06:54 UTC │ 28 Dec 25 06:54 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-694122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p old-k8s-version-694122 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:09.683208  252331 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:09.683522  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683533  252331 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:09.683539  252331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:09.683817  252331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:09.684408  252331 out.go:368] Setting JSON to false
	I1228 06:56:09.686138  252331 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2322,"bootTime":1766902648,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:09.686216  252331 start.go:143] virtualization: kvm guest
	I1228 06:56:09.688379  252331 out.go:179] * [no-preload-950460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:09.689966  252331 notify.go:221] Checking for updates...
	I1228 06:56:09.690624  252331 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:09.691759  252331 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:09.693287  252331 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:09.694542  252331 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:09.696489  252331 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:09.698353  252331 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:09.700204  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:09.700981  252331 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:09.731534  252331 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:09.731673  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.809872  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.797345649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.810012  252331 docker.go:319] overlay module found
	I1228 06:56:09.811872  252331 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:09.813113  252331 start.go:309] selected driver: docker
	I1228 06:56:09.813141  252331 start.go:928] validating driver "docker" against &{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.813261  252331 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:09.814183  252331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:09.889225  252331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-28 06:56:09.87743098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:09.889583  252331 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:09.889616  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:09.889688  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:09.889728  252331 start.go:353] cluster config:
	{Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:09.892491  252331 out.go:179] * Starting "no-preload-950460" primary control-plane node in "no-preload-950460" cluster
	I1228 06:56:09.893559  252331 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:09.895822  252331 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:09.897246  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:09.897378  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:09.897665  252331 cache.go:107] acquiring lock: {Name:mkd9176dc8bfe34090aff279f6f101ea6f0af9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.897748  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1228 06:56:09.897763  252331 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.737µs
	I1228 06:56:09.897776  252331 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1228 06:56:09.897792  252331 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:09.897921  252331 cache.go:107] acquiring lock: {Name:mk7d35a6d2b389149dcbeab5c7c2ffb31f57d65c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898003  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1228 06:56:09.898018  252331 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 105.145µs
	I1228 06:56:09.898051  252331 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1228 06:56:09.898068  252331 cache.go:107] acquiring lock: {Name:mk242447cc3bf85a80c449b21152ddfbb942621c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898065  252331 cache.go:107] acquiring lock: {Name:mke2c1949855d4a55e5668b0d2ae93b37c482c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898080  252331 cache.go:107] acquiring lock: {Name:mk532de4689e044277857a73866e5969a2e4fbc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898114  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1228 06:56:09.898091  252331 cache.go:107] acquiring lock: {Name:mke47ac9c7c044600bef8f6b93ef0e26dc8302f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898122  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1228 06:56:09.898122  252331 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 56.777µs
	I1228 06:56:09.898131  252331 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1228 06:56:09.898131  252331 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 104.803µs
	I1228 06:56:09.898140  252331 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1228 06:56:09.898147  252331 cache.go:107] acquiring lock: {Name:mk9e59e568752d1ca479b7f88a0993095cc4ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898154  252331 cache.go:107] acquiring lock: {Name:mk4a1a601fb4bce5015f4152fc8c90f967d969a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.898175  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1228 06:56:09.898185  252331 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 104.327µs
	I1228 06:56:09.898197  252331 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1228 06:56:09.898201  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1228 06:56:09.898209  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1228 06:56:09.898214  252331 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 145.471µs
	I1228 06:56:09.898217  252331 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 65.787µs
	I1228 06:56:09.898225  252331 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1228 06:56:09.898228  252331 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1228 06:56:09.898247  252331 cache.go:115] /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1228 06:56:09.898255  252331 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 110.483µs
	I1228 06:56:09.898263  252331 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22352-5550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1228 06:56:09.898271  252331 cache.go:87] Successfully saved all images to host disk.
	I1228 06:56:09.925389  252331 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:09.925420  252331 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:09.925442  252331 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:09.925482  252331 start.go:360] acquireMachinesLock for no-preload-950460: {Name:mk62d7b73784bafca52412532a69147c30805a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:09.925562  252331 start.go:364] duration metric: took 47.499µs to acquireMachinesLock for "no-preload-950460"
	I1228 06:56:09.925594  252331 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:09.925604  252331 fix.go:54] fixHost starting: 
	I1228 06:56:09.925883  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:09.947427  252331 fix.go:112] recreateIfNeeded on no-preload-950460: state=Stopped err=<nil>
	W1228 06:56:09.947470  252331 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:09.244143  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.744639  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.244325  243963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.364365  243963 kubeadm.go:1114] duration metric: took 4.219411016s to wait for elevateKubeSystemPrivileges
	I1228 06:56:10.364473  243963 kubeadm.go:403] duration metric: took 12.104828541s to StartCluster
	I1228 06:56:10.364513  243963 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.364574  243963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:10.367334  243963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:10.367689  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:10.368151  243963 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:10.368391  243963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:10.368490  243963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:10.368509  243963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	I1228 06:56:10.368558  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.369000  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.369135  243963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:10.369221  243963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:10.369280  243963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:10.369857  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.370623  243963 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:10.374484  243963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:10.403086  243963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:56:07.752961  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:09.756311  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:10.405267  243963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.405293  243963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:10.405355  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.407121  243963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	I1228 06:56:10.407166  243963 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:10.408137  243963 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:10.438924  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.442747  243963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.442772  243963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:10.442827  243963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:10.477359  243963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:10.532358  243963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:10.573979  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:10.588218  243963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:10.648019  243963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:10.867869  243963 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1228 06:56:11.085832  243963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:11.095783  243963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.058672  247213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:56:09.063442  247213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:56:09.063466  247213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:56:09.077870  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:56:09.407176  247213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:56:09.407367  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.407468  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-500581 minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=default-k8s-diff-port-500581 minikube.k8s.io/primary=true
	I1228 06:56:09.580457  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:09.580543  247213 ops.go:34] apiserver oom_adj: -16
	I1228 06:56:10.080579  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:10.581243  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.080638  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:11.581312  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.080705  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:12.580620  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.081161  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.581441  247213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:56:13.652690  247213 kubeadm.go:1114] duration metric: took 4.245373726s to wait for elevateKubeSystemPrivileges
	I1228 06:56:13.652726  247213 kubeadm.go:403] duration metric: took 12.364737655s to StartCluster
	I1228 06:56:13.652748  247213 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.652812  247213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:13.654909  247213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:13.655206  247213 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:13.655359  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:56:13.655613  247213 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:13.655657  247213 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:13.655720  247213 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.655737  247213 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.655761  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.656261  247213 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:13.656283  247213 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:13.656613  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.657602  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.660155  247213 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:13.661579  247213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:13.684520  247213 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:11.097178  243963 addons.go:530] duration metric: took 728.781424ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:11.372202  243963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422591" context rescaled to 1 replicas
	W1228 06:56:13.088569  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:13.685585  247213 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.685607  247213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:13.685662  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.686151  247213 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	I1228 06:56:13.686203  247213 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:13.686699  247213 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:13.718321  247213 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.718423  247213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:13.718565  247213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:13.728024  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.751115  247213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:13.767540  247213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:56:13.826652  247213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:13.845102  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:13.860783  247213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:13.971728  247213 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
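The sed pipeline a few lines above rewrites the CoreDNS Corefile in place so that pods can resolve host.minikube.internal to the Docker network gateway. Reconstructed from the sed expressions themselves (not dumped from the cluster), the relevant Corefile fragment ends up roughly as:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

with a `log` directive also inserted before the existing `errors` line.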
	I1228 06:56:13.973616  247213 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:14.185139  247213 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:56:09.949330  252331 out.go:252] * Restarting existing docker container for "no-preload-950460" ...
	I1228 06:56:09.949409  252331 cli_runner.go:164] Run: docker start no-preload-950460
	I1228 06:56:10.304369  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:10.333247  252331 kic.go:430] container "no-preload-950460" state is running.
	I1228 06:56:10.333791  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:10.362343  252331 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/config.json ...
	I1228 06:56:10.362749  252331 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:10.362898  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:10.399401  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:10.400763  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:10.400782  252331 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:10.401698  252331 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42784->127.0.0.1:33078: read: connection reset by peer
	I1228 06:56:13.530578  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.530607  252331 ubuntu.go:182] provisioning hostname "no-preload-950460"
	I1228 06:56:13.530671  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.551523  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.551766  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.551782  252331 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-950460 && echo "no-preload-950460" | sudo tee /etc/hostname
	I1228 06:56:13.697078  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-950460
	
	I1228 06:56:13.697213  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:13.734170  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:13.734651  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:13.734718  252331 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:13.876570  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:13.876646  252331 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:13.878995  252331 ubuntu.go:190] setting up certificates
	I1228 06:56:13.879017  252331 provision.go:84] configureAuth start
	I1228 06:56:13.879096  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:13.902076  252331 provision.go:143] copyHostCerts
	I1228 06:56:13.902141  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:13.902162  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:13.902253  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:13.902388  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:13.902401  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:13.902438  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:13.902511  252331 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:13.902520  252331 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:13.902560  252331 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:13.902624  252331 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.no-preload-950460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-950460]
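The server-cert generation step above produces a key pair signed by the existing minikube CA, with the org and SAN list printed in the log. A rough Go sketch of the equivalent operation using the standard library (illustrative only; signServerCert is a hypothetical name, not minikube's code):

	package pki

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert sketches the step: a fresh key pair signed by the CA,
	// carrying exactly the SANs listed in the log above.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-950460"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the profile
			DNSNames:     []string{"localhost", "minikube", "no-preload-950460"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}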
	I1228 06:56:14.048352  252331 provision.go:177] copyRemoteCerts
	I1228 06:56:14.048419  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:14.048452  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.068611  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:14.168261  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:14.190018  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:56:14.208765  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:14.226610  252331 provision.go:87] duration metric: took 347.581995ms to configureAuth
	I1228 06:56:14.226635  252331 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:14.226812  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:14.226900  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:14.244598  252331 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:14.244866  252331 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1228 06:56:14.244892  252331 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:12.253209  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:14.796990  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:15.100866  252331 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:15.100892  252331 machine.go:97] duration metric: took 4.738124144s to provisionDockerMachine
	I1228 06:56:15.100904  252331 start.go:293] postStartSetup for "no-preload-950460" (driver="docker")
	I1228 06:56:15.100918  252331 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:15.101012  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:15.101073  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.125860  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.230154  252331 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:15.234858  252331 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:15.234891  252331 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:15.234905  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:15.234956  252331 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:15.235108  252331 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:15.235252  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:15.245155  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:15.268602  252331 start.go:296] duration metric: took 167.682246ms for postStartSetup
	I1228 06:56:15.268700  252331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:15.268759  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.288607  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.381324  252331 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:15.386166  252331 fix.go:56] duration metric: took 5.460557205s for fixHost
	I1228 06:56:15.386193  252331 start.go:83] releasing machines lock for "no-preload-950460", held for 5.460617152s
	I1228 06:56:15.386267  252331 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-950460
	I1228 06:56:15.405738  252331 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:15.405806  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.405845  252331 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:15.405936  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:15.426086  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.426572  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:15.573340  252331 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:15.580022  252331 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:15.614860  252331 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:15.619799  252331 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:15.619859  252331 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:15.627841  252331 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:15.627863  252331 start.go:496] detecting cgroup driver to use...
	I1228 06:56:15.627897  252331 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:15.627935  252331 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:15.643627  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:15.656486  252331 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:15.656542  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:15.670796  252331 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:15.683099  252331 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:15.763732  252331 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:15.846193  252331 docker.go:234] disabling docker service ...
	I1228 06:56:15.846248  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:15.860365  252331 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:15.872316  252331 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:15.952498  252331 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:16.036768  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:16.048883  252331 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:16.062667  252331 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:16.062719  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.072039  252331 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:16.072100  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.080521  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.089148  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.097405  252331 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:16.105158  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.113413  252331 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:16.122659  252331 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
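Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands; the scripts only touch these individual keys, so surrounding TOML sections are omitted here):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

i.e. CRI-O is pinned to the expected pause image, switched to the systemd cgroup driver with conmon in the pod cgroup, and allowed to bind unprivileged ports from 0 up inside pods.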
	I1228 06:56:16.131327  252331 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:16.138849  252331 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:16.145687  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.222679  252331 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:16.520445  252331 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:16.520595  252331 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:16.524711  252331 start.go:574] Will wait 60s for crictl version
	I1228 06:56:16.524766  252331 ssh_runner.go:195] Run: which crictl
	I1228 06:56:16.528189  252331 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:16.553043  252331 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:16.553151  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.580248  252331 ssh_runner.go:195] Run: crio --version
	I1228 06:56:16.608534  252331 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:14.186403  247213 addons.go:530] duration metric: took 530.739381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:56:14.479845  247213 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-500581" context rescaled to 1 replicas
	W1228 06:56:15.976454  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:16.609592  252331 cli_runner.go:164] Run: docker network inspect no-preload-950460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:16.626775  252331 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:16.630900  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
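The bash one-liner above updates /etc/hosts in one step: filter out any stale line for the name, append the fresh mapping, write to a temp file, and copy it back over the original. A hypothetical Go rendering of the same grep/echo/cp pattern (ensureHostsEntry is an illustrative name):

	package hosts

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell pipeline: drop any existing line
	// ending in "\t<name>", append the fresh mapping, replace the file.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the log uses `sudo cp`, since /etc needs root
	}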
	I1228 06:56:16.641409  252331 kubeadm.go:884] updating cluster {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:16.641518  252331 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:16.641556  252331 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:16.675102  252331 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:16.675123  252331 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:16.675129  252331 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:16.675244  252331 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-950460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
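Note the two ExecStart= lines in the kubelet unit text above: the empty ExecStart= is the standard systemd drop-in idiom for clearing the inherited command list before redefining it. This fragment is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the 367-byte scp a few lines below.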
	I1228 06:56:16.675331  252331 ssh_runner.go:195] Run: crio config
	I1228 06:56:16.718702  252331 cni.go:84] Creating CNI manager for ""
	I1228 06:56:16.718733  252331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:16.718752  252331 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:16.718789  252331 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950460 NodeName:no-preload-950460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:16.718988  252331 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:16.719070  252331 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:16.727836  252331 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:16.727925  252331 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:16.735688  252331 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:16.748533  252331 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:16.761180  252331 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1228 06:56:16.774346  252331 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:16.777963  252331 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:16.787778  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:16.870258  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:16.897229  252331 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460 for IP: 192.168.94.2
	I1228 06:56:16.897252  252331 certs.go:195] generating shared ca certs ...
	I1228 06:56:16.897273  252331 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:16.897417  252331 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:16.897469  252331 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:16.897483  252331 certs.go:257] generating profile certs ...
	I1228 06:56:16.897565  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/client.key
	I1228 06:56:16.897621  252331 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key.3468f947
	I1228 06:56:16.897659  252331 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key
	I1228 06:56:16.897752  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:16.897786  252331 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:16.897800  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:16.897832  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:16.897861  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:16.897894  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:16.897943  252331 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:16.898713  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:16.917010  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:16.936367  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:16.957237  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:16.980495  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:16.998372  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1228 06:56:17.015059  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:17.031891  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/no-preload-950460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:17.049280  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:17.065663  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:17.082832  252331 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:17.100902  252331 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:17.113166  252331 ssh_runner.go:195] Run: openssl version
	I1228 06:56:17.119103  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.126689  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:17.134233  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.137970  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.138010  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:17.174376  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:17.182094  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.189546  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:17.196673  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200312  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.200355  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:17.235404  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:17.243056  252331 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.251423  252331 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:17.259118  252331 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262689  252331 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.262740  252331 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:17.298353  252331 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
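The `ln -fs` / `test -L` pairs above explain the opaque link names (b5213941.0, 51391683.0, 3ec20f2e.0): `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL's default verify directory uses to look certificates up as <hash>.0. A hypothetical Go helper reproducing the step (linkCertByHash is an illustrative name):

	package certs

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the subject hash, then symlinks
	// <hash>.0 in the verify directory so the cert is found during
	// chain verification.
	func linkCertByHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return link, os.Symlink(certPath, link)
	}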
	I1228 06:56:17.306420  252331 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:17.310366  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:17.344608  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:17.380698  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:17.426014  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:17.474223  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:17.531854  252331 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
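Each `-checkend 86400` run above asks whether the certificate will be expired 86400 seconds (24 hours) from now, so a restart fails fast on control-plane certs about to lapse. The Go analogue with the standard library (expiresWithinDay is a hypothetical name):

	package certs

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"time"
	)

	// expiresWithinDay is the Go analogue of `openssl x509 -checkend 86400`:
	// true if the certificate will be expired 24h from now.
	func expiresWithinDay(pemBytes []byte) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(86400 * time.Second).After(cert.NotAfter), nil
	}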
	I1228 06:56:17.577281  252331 kubeadm.go:401] StartCluster: {Name:no-preload-950460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-950460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:17.577434  252331 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:17.636151  252331 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:17.648977  252331 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:17.649067  252331 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:17.657728  252331 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:17.657748  252331 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:17.657796  252331 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:17.666778  252331 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:17.668081  252331 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-950460" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.668996  252331 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-950460" cluster setting kubeconfig missing "no-preload-950460" context setting]
	I1228 06:56:17.670453  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.672683  252331 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:17.683544  252331 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1228 06:56:17.683585  252331 kubeadm.go:602] duration metric: took 25.829752ms to restartPrimaryControlPlane
	I1228 06:56:17.683596  252331 kubeadm.go:403] duration metric: took 106.327386ms to StartCluster
	I1228 06:56:17.683615  252331 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.683665  252331 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:17.686260  252331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:17.686556  252331 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:17.686676  252331 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:17.686779  252331 addons.go:70] Setting storage-provisioner=true in profile "no-preload-950460"
	I1228 06:56:17.686790  252331 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:17.686794  252331 addons.go:239] Setting addon storage-provisioner=true in "no-preload-950460"
	W1228 06:56:17.686802  252331 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:17.686829  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686834  252331 addons.go:70] Setting default-storageclass=true in profile "no-preload-950460"
	I1228 06:56:17.686838  252331 addons.go:70] Setting dashboard=true in profile "no-preload-950460"
	I1228 06:56:17.686865  252331 addons.go:239] Setting addon dashboard=true in "no-preload-950460"
	W1228 06:56:17.686879  252331 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:17.686912  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.686847  252331 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950460"
	I1228 06:56:17.687329  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687415  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.687330  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.689184  252331 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:17.690310  252331 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:17.712805  252331 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:17.713229  252331 addons.go:239] Setting addon default-storageclass=true in "no-preload-950460"
	W1228 06:56:17.713248  252331 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:17.713270  252331 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:56:17.713562  252331 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:56:17.713731  252331 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:17.713774  252331 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.713791  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:17.713835  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.715782  252331 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1228 06:56:15.089728  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:17.589238  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:17.716776  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:17.716793  252331 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:17.716846  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.737306  252331 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.737329  252331 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:17.737387  252331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:56:17.747296  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.752550  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.763145  252331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:56:17.827637  252331 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:17.841176  252331 node_ready.go:35] waiting up to 6m0s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:17.852679  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:17.859387  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:17.859413  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:17.870358  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:17.876579  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:17.876626  252331 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:17.892110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:17.892137  252331 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:17.907110  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:17.907153  252331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:17.921175  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:17.921199  252331 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:17.934592  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:17.934610  252331 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:17.946620  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:17.946645  252331 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:17.958616  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:17.958637  252331 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:17.971511  252331 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:17.971531  252331 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:17.984466  252331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:19.111197  252331 node_ready.go:49] node "no-preload-950460" is "Ready"
	I1228 06:56:19.111234  252331 node_ready.go:38] duration metric: took 1.270013468s for node "no-preload-950460" to be "Ready" ...
	I1228 06:56:19.111250  252331 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:19.111303  252331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:19.644061  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.791326834s)
	I1228 06:56:19.644127  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.773734972s)
	I1228 06:56:19.644217  252331 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.659719643s)
	I1228 06:56:19.644238  252331 api_server.go:72] duration metric: took 1.957648252s to wait for apiserver process to appear ...
	I1228 06:56:19.644247  252331 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:19.644265  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:19.646079  252331 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-950460 addons enable metrics-server
	
	I1228 06:56:19.648689  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:19.648710  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:19.652919  252331 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:56:19.654055  252331 addons.go:530] duration metric: took 1.967385599s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
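
The 500 responses above are the apiserver's /healthz endpoint reporting poststarthooks that have not finished yet (the [-] lines, e.g. rbac/bootstrap-roles); minikube simply re-polls until the endpoint returns 200 "ok", which shows up later in this log at 06:56:20.648. Below is a minimal Go sketch of such a polling loop; the URL, deadline, and TLS handling are illustrative assumptions, not minikube's actual implementation.

    // healthz polling sketch — NOT minikube's code; the endpoint,
    // deadline, and InsecureSkipVerify are assumptions for illustration.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is self-signed in this sketch, so skip
            // verification here only; real code should trust the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return
                }
                // A 500 with [+]/[-] hook lines means some poststarthooks
                // are still settling; fall through and retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
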
	W1228 06:56:17.252978  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:19.752632  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:17.976710  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.476521  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:20.089066  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	W1228 06:56:22.089199  243963 node_ready.go:57] node "embed-certs-422591" has "Ready":"False" status (will retry)
	I1228 06:56:23.089137  243963 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:23.089171  243963 node_ready.go:38] duration metric: took 12.00330569s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:23.089188  243963 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:23.089247  243963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:23.109640  243963 api_server.go:72] duration metric: took 12.740459175s to wait for apiserver process to appear ...
	I1228 06:56:23.109670  243963 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:23.109691  243963 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:23.115347  243963 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:56:23.116388  243963 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:23.116413  243963 api_server.go:131] duration metric: took 6.736322ms to wait for apiserver health ...
	I1228 06:56:23.116422  243963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:23.120151  243963 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:23.120183  243963 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.120191  243963 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.120197  243963 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.120217  243963 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.120229  243963 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.120236  243963 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.120242  243963 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.120247  243963 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.120255  243963 system_pods.go:74] duration metric: took 3.827732ms to wait for pod list to return data ...
	I1228 06:56:23.120267  243963 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:23.122455  243963 default_sa.go:45] found service account: "default"
	I1228 06:56:23.122484  243963 default_sa.go:55] duration metric: took 2.209324ms for default service account to be created ...
	I1228 06:56:23.122495  243963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:23.125732  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.125761  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending
	I1228 06:56:23.125768  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.125774  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.125782  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.125798  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.125806  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.125812  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.125821  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending
	I1228 06:56:23.125858  243963 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1228 06:56:23.380969  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.381005  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.381014  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.381023  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.381042  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:23.381051  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.381057  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.381067  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.381075  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:23.736873  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:23.736924  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:23.736933  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:23.736942  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:23.736955  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:23.736965  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:23.736971  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:23.736990  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:23.737002  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:24.078656  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.078690  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:24.078696  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.078700  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.078704  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.078709  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.078712  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.078715  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.078721  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
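
The repeated 8-pod listings above are the "waiting for k8s-apps to be running" loop: each pass re-lists the kube-system pods, and retry.go backs off when a required component (here kube-dns) is still missing. A rough sketch of that re-list-and-back-off shape follows; listMissing is a hypothetical stand-in for the real pod lister, and the backoff schedule is an assumption.

    // Retry-with-backoff sketch — not minikube's retry.go.
    package main

    import (
        "fmt"
        "time"
    )

    // listMissing is a hypothetical stand-in for a pod lister that
    // returns the names of required components not yet running.
    func listMissing() []string {
        // ... query the API server here ...
        return nil
    }

    func main() {
        backoff := 300 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            missing := listMissing()
            if len(missing) == 0 {
                fmt.Println("all components running")
                return
            }
            fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
            time.Sleep(backoff)
            backoff *= 2 // simple exponential backoff between re-lists
        }
    }
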
	I1228 06:56:20.144322  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.148700  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:20.148728  252331 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:20.644327  252331 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1228 06:56:20.648377  252331 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1228 06:56:20.649429  252331 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:20.649449  252331 api_server.go:131] duration metric: took 1.005195846s to wait for apiserver health ...
	I1228 06:56:20.649458  252331 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:20.652593  252331 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:20.652630  252331 system_pods.go:61] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.652637  252331 system_pods.go:61] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.652644  252331 system_pods.go:61] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.652653  252331 system_pods.go:61] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.652667  252331 system_pods.go:61] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.652675  252331 system_pods.go:61] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.652686  252331 system_pods.go:61] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.652694  252331 system_pods.go:61] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.652703  252331 system_pods.go:74] duration metric: took 3.239436ms to wait for pod list to return data ...
	I1228 06:56:20.652715  252331 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:20.654840  252331 default_sa.go:45] found service account: "default"
	I1228 06:56:20.654856  252331 default_sa.go:55] duration metric: took 2.135398ms for default service account to be created ...
	I1228 06:56:20.654863  252331 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:20.656911  252331 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:20.656935  252331 system_pods.go:89] "coredns-7d764666f9-npk6g" [a3cc436b-e460-483e-99aa-f7d44599d666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:20.656943  252331 system_pods.go:89] "etcd-no-preload-950460" [61fd908c-4329-4432-82b2-80206bbbb703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:56:20.656950  252331 system_pods.go:89] "kindnet-xhb7x" [4bab0d9b-3499-4546-bb8c-e47bfc17dbbf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:56:20.656955  252331 system_pods.go:89] "kube-apiserver-no-preload-950460" [2aeafb60-9003-44c3-b5cb-960dd4a668c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:56:20.656961  252331 system_pods.go:89] "kube-controller-manager-no-preload-950460" [b38f2ea3-71b8-45e0-9c27-eb7fddfc67a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:56:20.656969  252331 system_pods.go:89] "kube-proxy-294rn" [c88bb406-588c-45ec-9225-946af7327ec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:56:20.656974  252331 system_pods.go:89] "kube-scheduler-no-preload-950460" [24b95531-e1d2-47ff-abd3-70d0cdab9fe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:56:20.656979  252331 system_pods.go:89] "storage-provisioner" [a4076523-c034-4331-8dd7-a506e9dec2d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:20.656988  252331 system_pods.go:126] duration metric: took 2.120486ms to wait for k8s-apps to be running ...
	I1228 06:56:20.656995  252331 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:20.657051  252331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:20.671024  252331 system_svc.go:56] duration metric: took 14.023192ms WaitForService to wait for kubelet
	I1228 06:56:20.671072  252331 kubeadm.go:587] duration metric: took 2.984480725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:20.671093  252331 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:20.673706  252331 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:20.673727  252331 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:20.673740  252331 node_conditions.go:105] duration metric: took 2.643602ms to run NodePressure ...
	I1228 06:56:20.673752  252331 start.go:242] waiting for startup goroutines ...
	I1228 06:56:20.673758  252331 start.go:247] waiting for cluster config update ...
	I1228 06:56:20.673773  252331 start.go:256] writing updated cluster config ...
	I1228 06:56:20.674067  252331 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:20.677778  252331 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:20.681121  252331 pod_ready.go:83] waiting for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:56:22.686104  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
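
pod_ready.go's check reduces to reading the pod's Ready condition from its status; the W lines above are retries while that condition is still False. A minimal client-go sketch of the same check, assuming a kubeconfig at the default path and borrowing a pod name from this log for illustration:

    // Pod Ready-condition sketch — kubeconfig path and pod name are
    // illustrative assumptions, not minikube's implementation.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "coredns-7d764666f9-npk6g", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A pod is "Ready" when its PodReady condition is True.
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("pod %q Ready=%v\n", pod.Name, c.Status == corev1.ConditionTrue)
            }
        }
    }
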
	W1228 06:56:22.251764  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	W1228 06:56:24.253072  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:24.497471  243963 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:24.497502  243963 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running
	I1228 06:56:24.497510  243963 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running
	I1228 06:56:24.497516  243963 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running
	I1228 06:56:24.497521  243963 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running
	I1228 06:56:24.497528  243963 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running
	I1228 06:56:24.497533  243963 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running
	I1228 06:56:24.497539  243963 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running
	I1228 06:56:24.497545  243963 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running
	I1228 06:56:24.497556  243963 system_pods.go:126] duration metric: took 1.375053604s to wait for k8s-apps to be running ...
	I1228 06:56:24.497578  243963 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:24.497628  243963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:24.514567  243963 system_svc.go:56] duration metric: took 16.979492ms WaitForService to wait for kubelet
	I1228 06:56:24.514605  243963 kubeadm.go:587] duration metric: took 14.145429952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:24.514629  243963 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:24.518108  243963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:24.518140  243963 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:24.518158  243963 node_conditions.go:105] duration metric: took 3.522325ms to run NodePressure ...
	I1228 06:56:24.518177  243963 start.go:242] waiting for startup goroutines ...
	I1228 06:56:24.518186  243963 start.go:247] waiting for cluster config update ...
	I1228 06:56:24.518200  243963 start.go:256] writing updated cluster config ...
	I1228 06:56:24.518505  243963 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:24.523480  243963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:24.528339  243963 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.533314  243963 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:56:24.533340  243963 pod_ready.go:86] duration metric: took 4.973959ms for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.535652  243963 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.540088  243963 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:56:24.540118  243963 pod_ready.go:86] duration metric: took 4.440493ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.542361  243963 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.546378  243963 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:56:24.546401  243963 pod_ready.go:86] duration metric: took 4.016397ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.548746  243963 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:24.928795  243963 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:56:24.928827  243963 pod_ready.go:86] duration metric: took 380.060187ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.129424  243963 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.528796  243963 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:56:25.528829  243963 pod_ready.go:86] duration metric: took 399.379664ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:25.728149  243963 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129240  243963 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:56:26.129352  243963 pod_ready.go:86] duration metric: took 401.16633ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:26.129383  243963 pod_ready.go:40] duration metric: took 1.605872095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:26.195003  243963 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:26.196497  243963 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
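
Two host-level probes recur throughout these bursts: pgrep -xnf against the apiserver command line to confirm the process exists, and systemctl is-active --quiet to confirm the kubelet unit is up; in both cases only the exit status matters. A small sketch of the same probes via os/exec, run locally for simplicity rather than over SSH as minikube's ssh_runner does:

    // Host-probe sketch — simplified from the log's sudo invocations.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep: -x exact match, -n newest, -f match full command line.
        apiserver := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
        // --quiet suppresses output; exit code 0 means the unit is active.
        kubelet := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
        fmt.Printf("apiserver process: %v, kubelet service: %v\n", apiserver, kubelet)
    }
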
	W1228 06:56:22.478649  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	W1228 06:56:24.977721  247213 node_ready.go:57] node "default-k8s-diff-port-500581" has "Ready":"False" status (will retry)
	I1228 06:56:26.478547  247213 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:26.478581  247213 node_ready.go:38] duration metric: took 12.504894114s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:26.478597  247213 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:26.478645  247213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:26.500009  247213 api_server.go:72] duration metric: took 12.844753456s to wait for apiserver process to appear ...
	I1228 06:56:26.500069  247213 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:26.500092  247213 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:56:26.505791  247213 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:56:26.506819  247213 api_server.go:141] control plane version: v1.35.0
	I1228 06:56:26.506850  247213 api_server.go:131] duration metric: took 6.772745ms to wait for apiserver health ...
	I1228 06:56:26.506860  247213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:56:26.511152  247213 system_pods.go:59] 8 kube-system pods found
	I1228 06:56:26.511188  247213 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.511196  247213 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.511210  247213 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.511217  247213 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.511223  247213 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.511228  247213 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.511237  247213 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.511245  247213 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.511257  247213 system_pods.go:74] duration metric: took 4.390309ms to wait for pod list to return data ...
	I1228 06:56:26.511272  247213 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:56:26.516259  247213 default_sa.go:45] found service account: "default"
	I1228 06:56:26.516290  247213 default_sa.go:55] duration metric: took 5.010014ms for default service account to be created ...
	I1228 06:56:26.516302  247213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:56:26.522640  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.522682  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.522692  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.522701  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.522706  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.522712  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.522718  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.522725  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.522732  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:26.522761  247213 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1228 06:56:26.727648  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:26.727695  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:26.727705  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:26.727714  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:26.727719  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:26.727726  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:26.727733  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:26.727739  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:26.727753  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.048953  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.048983  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.048988  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.048995  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.048999  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.049002  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.049006  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.049012  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.049019  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.347697  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.347744  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:56:27.347753  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.347761  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.347767  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.347773  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.347779  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.347784  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.347792  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:56:27.894612  247213 system_pods.go:86] 8 kube-system pods found
	I1228 06:56:27.894645  247213 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running
	I1228 06:56:27.894654  247213 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running
	I1228 06:56:27.894661  247213 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running
	I1228 06:56:27.894668  247213 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running
	I1228 06:56:27.894674  247213 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running
	I1228 06:56:27.894747  247213 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running
	I1228 06:56:27.894780  247213 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running
	I1228 06:56:27.894786  247213 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running
	I1228 06:56:27.894796  247213 system_pods.go:126] duration metric: took 1.378485807s to wait for k8s-apps to be running ...
	I1228 06:56:27.894807  247213 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:56:27.894877  247213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:56:27.913725  247213 system_svc.go:56] duration metric: took 18.908162ms WaitForService to wait for kubelet
	I1228 06:56:27.913765  247213 kubeadm.go:587] duration metric: took 14.258529006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:27.913788  247213 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:56:27.917024  247213 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:56:27.917082  247213 node_conditions.go:123] node cpu capacity is 8
	I1228 06:56:27.917101  247213 node_conditions.go:105] duration metric: took 3.307449ms to run NodePressure ...
	I1228 06:56:27.917117  247213 start.go:242] waiting for startup goroutines ...
	I1228 06:56:27.917128  247213 start.go:247] waiting for cluster config update ...
	I1228 06:56:27.917147  247213 start.go:256] writing updated cluster config ...
	I1228 06:56:27.917432  247213 ssh_runner.go:195] Run: rm -f paused
	I1228 06:56:27.922292  247213 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:27.928675  247213 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.933976  247213 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:56:27.934000  247213 pod_ready.go:86] duration metric: took 5.293782ms for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.952822  247213 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.957941  247213 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.957969  247213 pod_ready.go:86] duration metric: took 5.117578ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.960256  247213 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.964517  247213 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:27.964541  247213 pod_ready.go:86] duration metric: took 4.26155ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:27.966612  247213 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.326675  247213 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:28.326711  247213 pod_ready.go:86] duration metric: took 360.070556ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.527492  247213 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.926562  247213 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:56:28.926586  247213 pod_ready.go:86] duration metric: took 398.654778ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.128257  247213 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527347  247213 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:56:29.527373  247213 pod_ready.go:86] duration metric: took 399.091542ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.527384  247213 pod_ready.go:40] duration metric: took 1.605062412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.572470  247213 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:29.574045  247213 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
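
node_ready.go's wait is the same pattern at node scope: poll the node object until its Ready condition flips to True, retrying (the "will retry" W lines above) while it reports False. A client-go sketch, with the node name taken from this log and the kubeconfig path an assumption:

    // Node Ready-condition sketch — not minikube's node_ready.go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(),
            "default-k8s-diff-port-500581", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The node is schedulable-ready when NodeReady is "True".
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %q Ready=%v\n", node.Name, c.Status)
            }
        }
    }
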
	W1228 06:56:24.687607  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:27.187235  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:26.754423  242715 pod_ready.go:104] pod "coredns-5dd5756b68-f75js" is not "Ready", error: <nil>
	I1228 06:56:28.252283  242715 pod_ready.go:94] pod "coredns-5dd5756b68-f75js" is "Ready"
	I1228 06:56:28.252312  242715 pod_ready.go:86] duration metric: took 34.005583819s for pod "coredns-5dd5756b68-f75js" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.255219  242715 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.259146  242715 pod_ready.go:94] pod "etcd-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.259168  242715 pod_ready.go:86] duration metric: took 3.930339ms for pod "etcd-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.261639  242715 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.265232  242715 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.265251  242715 pod_ready.go:86] duration metric: took 3.589847ms for pod "kube-apiserver-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.267802  242715 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.450233  242715 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-694122" is "Ready"
	I1228 06:56:28.450266  242715 pod_ready.go:86] duration metric: took 182.442698ms for pod "kube-controller-manager-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:28.651005  242715 pod_ready.go:83] waiting for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.050020  242715 pod_ready.go:94] pod "kube-proxy-ckjcc" is "Ready"
	I1228 06:56:29.050071  242715 pod_ready.go:86] duration metric: took 399.008645ms for pod "kube-proxy-ckjcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.250805  242715 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650219  242715 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-694122" is "Ready"
	I1228 06:56:29.650260  242715 pod_ready.go:86] duration metric: took 399.415539ms for pod "kube-scheduler-old-k8s-version-694122" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:29.650277  242715 pod_ready.go:40] duration metric: took 35.408765036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:29.699567  242715 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1228 06:56:29.701172  242715 out.go:203] 
	W1228 06:56:29.702316  242715 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1228 06:56:29.703412  242715 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1228 06:56:29.704563  242715 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-694122" cluster and "default" namespace by default
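
The skew warning above is mechanical: kubectl 1.35.0 against a 1.28.0 cluster is a minor-version gap of 7, far beyond the +/-1 minor skew kubectl officially supports, hence the suggestion to use the matching `minikube kubectl`. A toy sketch of that computation (version strings hard-coded from the log):

    // Minor-skew sketch — versions taken from the log lines above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        client, cluster := "1.35.0", "1.28.0"
        skew := minor(client) - minor(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
        // kubectl supports only +/-1 minor skew, so skew 7 triggers the warning.
    }
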
	W1228 06:56:29.687654  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:32.186292  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:34.688806  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:37.186324  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:39.686740  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:42.187036  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:12 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:12.583861747Z" level=info msg="Started container" PID=1749 containerID=c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper id=d8f72510-df38-4a1e-b66d-8200ed7fcfac name=/runtime.v1.RuntimeService/StartContainer sandboxID=0808fc8b39668eb765d43cd53be6d550282e68223c13e4c44416f894f4c89a48
	Dec 28 06:56:13 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:13.530022934Z" level=info msg="Removing container: b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590" id=154e6c22-398f-4ba2-ab27-5461aa61feb4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:13 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:13.540722929Z" level=info msg="Removed container b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=154e6c22-398f-4ba2-ab27-5461aa61feb4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.56020008Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6242be8e-0820-4d36-a206-dfc6457fbd5a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.563073552Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3f8fe7a6-c2ae-4b26-af3a-d75d7f6379d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.564501421Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8d1349e6-709e-487a-8a7a-2ffab78862b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.564647689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.571618039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.57204935Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6107831efab17c6445982290999edcdfbf3a71ff069ed50755394b0bb934f622/merged/etc/passwd: no such file or directory"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.572086214Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6107831efab17c6445982290999edcdfbf3a71ff069ed50755394b0bb934f622/merged/etc/group: no such file or directory"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.572398901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.600189949Z" level=info msg="Created container d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500: kube-system/storage-provisioner/storage-provisioner" id=8d1349e6-709e-487a-8a7a-2ffab78862b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.600783768Z" level=info msg="Starting container: d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500" id=533fb44d-8a59-4498-b438-60834329cc4b name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:24 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:24.60266285Z" level=info msg="Started container" PID=1764 containerID=d7436b6e793ed053cdb548e87a1b716f0abde877c580e91628b6947ecedb7500 description=kube-system/storage-provisioner/storage-provisioner id=533fb44d-8a59-4498-b438-60834329cc4b name=/runtime.v1.RuntimeService/StartContainer sandboxID=4214aa96db9c297a97f32344eb657fdea04fd2b6d854cc7e84324a2dc8dc18fd
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.434631051Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc4e48b5-37de-48a9-a58b-6796bbb51523 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.435689897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=077bdfb4-65a2-4682-8f7b-28037c07eee6 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.436847019Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=4c206b6f-dd78-452f-8e56-04cb49c41a75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.437008869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.444452577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.445249606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.486789085Z" level=info msg="Created container 634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=4c206b6f-dd78-452f-8e56-04cb49c41a75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.48757547Z" level=info msg="Starting container: 634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0" id=2c5c78bc-a428-4228-8e7c-193ba7dbdcc1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.491192029Z" level=info msg="Started container" PID=1779 containerID=634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper id=2c5c78bc-a428-4228-8e7c-193ba7dbdcc1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0808fc8b39668eb765d43cd53be6d550282e68223c13e4c44416f894f4c89a48
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.568854047Z" level=info msg="Removing container: c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1" id=a4054836-346a-45f2-9a60-a74395d55074 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:26 old-k8s-version-694122 crio[564]: time="2025-12-28T06:56:26.579146825Z" level=info msg="Removed container c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk/dashboard-metrics-scraper" id=a4054836-346a-45f2-9a60-a74395d55074 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	634b121325efb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   0808fc8b39668       dashboard-metrics-scraper-5f989dc9cf-mkkbk       kubernetes-dashboard
	d7436b6e793ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         2                   4214aa96db9c2       storage-provisioner                              kube-system
	10a25e4b39812       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   b3037a7a10e70       kubernetes-dashboard-8694d4445c-qf9rt            kubernetes-dashboard
	6639adc29af47       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   52ceeafae214e       busybox                                          default
	182591cdfe4f6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     1                   17f02b5b39d0f       coredns-5dd5756b68-f75js                         kube-system
	5bcdfe2687f64       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 1                   07e71cbc44781       kindnet-v7rhd                                    kube-system
	b8f42856aab91       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  1                   062e68583870d       kube-proxy-ckjcc                                 kube-system
	a008846c26c9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         1                   4214aa96db9c2       storage-provisioner                              kube-system
	55724a0fe5b72       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              1                   8a6c7392541b8       kube-apiserver-old-k8s-version-694122            kube-system
	6fcb9aa2e6b19       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        1                   bcabddefdb866       etcd-old-k8s-version-694122                      kube-system
	c9544b4339f1c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              1                   da915cd0b9c90       kube-scheduler-old-k8s-version-694122            kube-system
	f8f115ceec0e8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     1                   8af419195ec82       kube-controller-manager-old-k8s-version-694122   kube-system
	
	
	==> describe nodes <==
	Name:               old-k8s-version-694122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=old-k8s-version-694122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_54_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-694122
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:56:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:54:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:23 +0000   Sun, 28 Dec 2025 06:55:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-694122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                65f3a296-84a7-49ed-b5c2-55741073e206
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-f75js                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-694122                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-v7rhd                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-694122             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-694122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-ckjcc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-694122             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mkkbk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qf9rt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x9 over 2m4s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-694122 event: Registered Node old-k8s-version-694122 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-694122 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-694122 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-694122 event: Registered Node old-k8s-version-694122 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:56:46 up 39 min,  0 user,  load average: 3.07, 2.66, 1.73
	Linux old-k8s-version-694122 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.336847     723 topology_manager.go:215] "Topology Admit Handler" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357506     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6j7\" (UniqueName: \"kubernetes.io/projected/3117b9db-546f-40fa-8346-edd08efb1341-kube-api-access-sl6j7\") pod \"kubernetes-dashboard-8694d4445c-qf9rt\" (UID: \"3117b9db-546f-40fa-8346-edd08efb1341\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357560     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ce4bfb24-cae7-489f-9305-08e4a6df88cc-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mkkbk\" (UID: \"ce4bfb24-cae7-489f-9305-08e4a6df88cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357590     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3117b9db-546f-40fa-8346-edd08efb1341-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qf9rt\" (UID: \"3117b9db-546f-40fa-8346-edd08efb1341\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt"
	Dec 28 06:56:05 old-k8s-version-694122 kubelet[723]: I1228 06:56:05.357620     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq4qn\" (UniqueName: \"kubernetes.io/projected/ce4bfb24-cae7-489f-9305-08e4a6df88cc-kube-api-access-zq4qn\") pod \"dashboard-metrics-scraper-5f989dc9cf-mkkbk\" (UID: \"ce4bfb24-cae7-489f-9305-08e4a6df88cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk"
	Dec 28 06:56:12 old-k8s-version-694122 kubelet[723]: I1228 06:56:12.523613     723 scope.go:117] "RemoveContainer" containerID="b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590"
	Dec 28 06:56:12 old-k8s-version-694122 kubelet[723]: I1228 06:56:12.539464     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qf9rt" podStartSLOduration=3.93451934 podCreationTimestamp="2025-12-28 06:56:05 +0000 UTC" firstStartedPulling="2025-12-28 06:56:05.671171109 +0000 UTC m=+16.408637525" lastFinishedPulling="2025-12-28 06:56:09.27605435 +0000 UTC m=+20.013520773" observedRunningTime="2025-12-28 06:56:09.548570585 +0000 UTC m=+20.286037009" watchObservedRunningTime="2025-12-28 06:56:12.539402588 +0000 UTC m=+23.276869011"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: I1228 06:56:13.528390     723 scope.go:117] "RemoveContainer" containerID="b2f97b7a4836f393a623ba91543550ff23480a382de7fd86c242856a2df14590"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: I1228 06:56:13.528574     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:13 old-k8s-version-694122 kubelet[723]: E1228 06:56:13.528926     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:14 old-k8s-version-694122 kubelet[723]: I1228 06:56:14.532491     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:14 old-k8s-version-694122 kubelet[723]: E1228 06:56:14.532843     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:15 old-k8s-version-694122 kubelet[723]: I1228 06:56:15.639218     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:15 old-k8s-version-694122 kubelet[723]: E1228 06:56:15.639481     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:24 old-k8s-version-694122 kubelet[723]: I1228 06:56:24.559220     723 scope.go:117] "RemoveContainer" containerID="a008846c26c9eef13eef5f37d0f0812a09129981cff37d7db2ce2c8b39b00d8c"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.433888     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.567464     723 scope.go:117] "RemoveContainer" containerID="c94cc1ce890a730cde51c0fd5c25d3ab34d128a957c2289626ac8eee4aac68c1"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: I1228 06:56:26.567775     723 scope.go:117] "RemoveContainer" containerID="634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0"
	Dec 28 06:56:26 old-k8s-version-694122 kubelet[723]: E1228 06:56:26.568346     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:35 old-k8s-version-694122 kubelet[723]: I1228 06:56:35.639392     723 scope.go:117] "RemoveContainer" containerID="634b121325efb9b7ad4cb6a1e52246b51f8cdc8531f6282f6fa6f066be872bc0"
	Dec 28 06:56:35 old-k8s-version-694122 kubelet[723]: E1228 06:56:35.639655     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mkkbk_kubernetes-dashboard(ce4bfb24-cae7-489f-9305-08e4a6df88cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mkkbk" podUID="ce4bfb24-cae7-489f-9305-08e4a6df88cc"
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:56:41 old-k8s-version-694122 systemd[1]: kubelet.service: Consumed 1.537s CPU time.
	

-- /stdout --
** stderr ** 
	E1228 06:56:45.983676  259271 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:45Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.047460  259271 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.110200  259271 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.173518  259271 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.237653  259271 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.300124  259271 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.362036  259271 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.422021  259271 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:56:46.481860  259271 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:46Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694122 -n old-k8s-version-694122
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694122 -n old-k8s-version-694122: exit status 2 (326.027076ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-694122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.78s)
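Note: this Pause failure has the same signature as every other failure in this report. The paused-state probe is the literal command `sudo runc --root /run/runc list -f json` (visible in the stderr blocks above), and on these crio 1.35 nodes /run/runc does not exist, so the probe always exits with status 1. A minimal Go sketch of that probe, for illustration only (not minikube's actual pause code; the hard-coded /run/runc root is taken from the logs above):

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers mirrors the probe seen throughout this report:
// `sudo runc --root /run/runc list -f json`. When the --root directory
// does not exist, runc exits with status 1 and prints
// "open /run/runc: no such file or directory" on stderr, which is
// exactly the failure captured in the stderr blocks above.
func listRuncContainers(root string) ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("runc list (root=%s): %w; output: %s", root, err, out)
	}
	return out, nil
}

func main() {
	// /run/runc is the root hard-coded in the failing probes above.
	if _, err := listRuncContainers("/run/runc"); err != nil {
		fmt.Println(err)
	}
}

Run on one of these nodes, this prints the same "open /run/runc: no such file or directory" error captured in the stderr blocks.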

TestStartStop/group/no-preload/serial/Pause (6.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-950460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-950460 --alsologtostderr -v=1: exit status 80 (1.761320738s)

-- stdout --
	* Pausing node no-preload-950460 ... 
	
	

-- /stdout --
** stderr ** 
	I1228 06:57:09.541556  267557 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:09.541674  267557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:09.541685  267557 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:09.541692  267557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:09.541969  267557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:09.542312  267557 out.go:368] Setting JSON to false
	I1228 06:57:09.542330  267557 mustload.go:66] Loading cluster: no-preload-950460
	I1228 06:57:09.542783  267557 config.go:182] Loaded profile config "no-preload-950460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:09.543381  267557 cli_runner.go:164] Run: docker container inspect no-preload-950460 --format={{.State.Status}}
	I1228 06:57:09.568706  267557 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:57:09.569153  267557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:09.646846  267557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-28 06:57:09.634525154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:09.648441  267557 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-950460 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:57:09.651064  267557 out.go:179] * Pausing node no-preload-950460 ... 
	I1228 06:57:09.652639  267557 host.go:66] Checking if "no-preload-950460" exists ...
	I1228 06:57:09.652978  267557 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:09.653018  267557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-950460
	I1228 06:57:09.679195  267557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/no-preload-950460/id_rsa Username:docker}
	I1228 06:57:09.785342  267557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:09.801097  267557 pause.go:52] kubelet running: true
	I1228 06:57:09.801214  267557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:10.004589  267557 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:10.071455  267557 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:10.086844  267557 retry.go:84] will retry after 400ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:10Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:10.454258  267557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:10.471568  267557 pause.go:52] kubelet running: false
	I1228 06:57:10.471632  267557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:10.669460  267557 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:10.731532  267557 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:10.982655  267557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:10.997274  267557 pause.go:52] kubelet running: false
	I1228 06:57:10.997333  267557 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:11.155900  267557 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:11.211695  267557 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:11.225580  267557 out.go:203] 
	W1228 06:57:11.226805  267557 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:57:11.226824  267557 out.go:285] * 
	* 
	W1228 06:57:11.228791  267557 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:57:11.229958  267557 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-950460 --alsologtostderr -v=1 failed: exit status 80
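The stderr trace above shows why this run is worse than a single failed probe: pause first runs `sudo systemctl disable --now kubelet` (leaving the kubelet stopped), only then fails on the runc listing, and repeats the whole disable-and-list sequence twice more before giving up with GUEST_PAUSE. Below is a hedged sketch of a pre-flight check that would surface the root cause directly; runtimeRootExists is a hypothetical helper, not minikube code:

package main

import (
	"fmt"
	"os"
)

// runtimeRootExists is a hypothetical pre-flight check (not minikube
// code): stat the runtime state directory before shelling out to
// `runc --root <dir> list`, so a missing /run/runc surfaces as one
// clear error instead of repeated probe failures and a generic
// GUEST_PAUSE exit.
func runtimeRootExists(dir string) error {
	info, err := os.Stat(dir)
	if err != nil {
		return fmt.Errorf("runtime root %q not usable: %w", dir, err)
	}
	if !info.IsDir() {
		return fmt.Errorf("runtime root %q is not a directory", dir)
	}
	return nil
}

func main() {
	if err := runtimeRootExists("/run/runc"); err != nil {
		fmt.Println(err) // on these nodes: "no such file or directory"
	}
}

Stat-ing the --root directory first would turn the three failed runc invocations into a single, self-explanatory error.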
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-950460
helpers_test.go:244: (dbg) docker inspect no-preload-950460:

-- stdout --
	[
	    {
	        "Id": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	        "Created": "2025-12-28T06:55:00.893625015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:09.982557423Z",
	            "FinishedAt": "2025-12-28T06:56:08.019682014Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hostname",
	        "HostsPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hosts",
	        "LogPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137-json.log",
	        "Name": "/no-preload-950460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-950460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-950460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	                "LowerDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-950460",
	                "Source": "/var/lib/docker/volumes/no-preload-950460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-950460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-950460",
	                "name.minikube.sigs.k8s.io": "no-preload-950460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f4fcd0aa467545a18fbda5e7e520616ea83d2e9b4f2c45d5573f4c9b0e4b1362",
	            "SandboxKey": "/var/run/docker/netns/f4fcd0aa4675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-950460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "17f0dbca0a318f9427a146748bffe1e85820955f787d11210b299ebcf405441e",
	                    "EndpointID": "87c66e7af727b8b5210d0bc65bb4faef63cfac5d88ac700fba4088acaa66c4bc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "36:2b:81:9b:ac:c6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-950460",
	                        "7db017036a6f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460: exit status 2 (351.551425ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950460 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-950460 logs -n 25: (1.367553772s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ test-preload-785573 image list                                                                                                                                                                                                                │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
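
The header format above is the standard glog/klog layout. For post-processing these logs, a minimal Go sketch of a parser for that header (a hypothetical helper, not part of minikube itself):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1228 06:56:51.304822  261568 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
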
	I1228 06:56:51.304822  261568 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:51.304949  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.304962  261568 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:51.304969  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.305236  261568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:51.305658  261568 out.go:368] Setting JSON to false
	I1228 06:56:51.306949  261568 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2363,"bootTime":1766902648,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:51.306998  261568 start.go:143] virtualization: kvm guest
	I1228 06:56:51.312562  261568 out.go:179] * [default-k8s-diff-port-500581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:51.313893  261568 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:51.313933  261568 notify.go:221] Checking for updates...
	I1228 06:56:51.316760  261568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:51.318014  261568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:51.322529  261568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:51.323905  261568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:51.325197  261568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:51.326905  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:51.327673  261568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:51.352695  261568 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:51.352843  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.414000  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.40353353 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:51.414142  261568 docker.go:319] overlay module found
	I1228 06:56:51.418800  261568 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:51.419979  261568 start.go:309] selected driver: docker
	I1228 06:56:51.419992  261568 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.420098  261568 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:51.420695  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.478184  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.468547864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:51.478493  261568 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:51.478528  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:51.478601  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:51.478656  261568 start.go:353] cluster config:
	{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.480689  261568 out.go:179] * Starting "default-k8s-diff-port-500581" primary control-plane node in "default-k8s-diff-port-500581" cluster
	I1228 06:56:51.482007  261568 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:51.483353  261568 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:51.484469  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.484517  261568 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:56:51.484526  261568 cache.go:65] Caching tarball of preloaded images
	I1228 06:56:51.484594  261568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:51.484617  261568 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:56:51.484731  261568 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:56:51.484886  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.507639  261568 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:51.507662  261568 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:51.507678  261568 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:51.507716  261568 start.go:360] acquireMachinesLock for default-k8s-diff-port-500581: {Name:mk09ab6a942c8bf16d457c533e6be9200b317247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:51.507793  261568 start.go:364] duration metric: took 42.618µs to acquireMachinesLock for "default-k8s-diff-port-500581"
	I1228 06:56:51.507811  261568 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:51.507818  261568 fix.go:54] fixHost starting: 
	I1228 06:56:51.508017  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.526407  261568 fix.go:112] recreateIfNeeded on default-k8s-diff-port-500581: state=Stopped err=<nil>
	W1228 06:56:51.526437  261568 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:49.299782  260283 out.go:252] * Restarting existing docker container for "embed-certs-422591" ...
	I1228 06:56:49.299856  260283 cli_runner.go:164] Run: docker start embed-certs-422591
	I1228 06:56:50.029376  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:50.048972  260283 kic.go:430] container "embed-certs-422591" state is running.
	I1228 06:56:50.049416  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:50.070752  260283 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/config.json ...
	I1228 06:56:50.070988  260283 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:50.071086  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:50.094281  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:50.094592  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:50.094614  260283 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:50.095430  260283 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32768->127.0.0.1:33083: read: connection reset by peer
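
The "connection reset by peer" above is expected while the restarted container's sshd is still coming up; the provisioner simply retries the dial, and the next log line shows it eventually succeeding. A minimal sketch of such a retry loop, assuming a plain TCP dial as a stand-in for libmachine's SSH client:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry re-dials the forwarded SSH port until the restarted
// container's sshd accepts connections; resets during boot are retried.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 5*time.Second); err == nil {
			return c, nil
		}
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("ssh port %s never became ready: %w", addr, err)
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:33083", 10, time.Second); err == nil {
		c.Close()
		fmt.Println("sshd is up")
	} else {
		fmt.Println(err)
	}
}
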
	I1228 06:56:53.224998  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.225041  260283 ubuntu.go:182] provisioning hostname "embed-certs-422591"
	I1228 06:56:53.225100  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.244551  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.244828  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.244846  260283 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-422591 && echo "embed-certs-422591" | sudo tee /etc/hostname
	I1228 06:56:53.389453  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.389539  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.409408  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.409692  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.409717  260283 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422591/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:53.535649  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
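
The SSH command above pins the hostname in /etc/hosts idempotently: if an entry for the host already exists it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends a new one. The same logic as a Go sketch operating on the file contents (patchEtcHosts is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patchEtcHosts mirrors the shell above: leave the file alone if the
// hostname is present; else rewrite the 127.0.1.1 line or append one.
func patchEtcHosts(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	line := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, line)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + line + "\n"
}

func main() {
	fmt.Print(patchEtcHosts("127.0.0.1 localhost\n", "embed-certs-422591"))
}
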
	I1228 06:56:53.535685  260283 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:53.535733  260283 ubuntu.go:190] setting up certificates
	I1228 06:56:53.535752  260283 provision.go:84] configureAuth start
	I1228 06:56:53.535838  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:53.554332  260283 provision.go:143] copyHostCerts
	I1228 06:56:53.554402  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:53.554423  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:53.554514  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:53.554657  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:53.554671  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:53.554718  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:53.554817  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:53.554834  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:53.554898  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:53.554996  260283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422591 san=[127.0.0.1 192.168.76.2 embed-certs-422591 localhost minikube]
	I1228 06:56:53.616863  260283 provision.go:177] copyRemoteCerts
	I1228 06:56:53.616949  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:53.616995  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.635721  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:53.727300  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:53.745199  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1228 06:56:53.763059  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:53.779536  260283 provision.go:87] duration metric: took 243.761087ms to configureAuth
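
configureAuth regenerates a server certificate whose SAN list covers every name the machine may be reached by (loopback, the container IP, the profile name, localhost, minikube), as the san=[...] line above shows. A self-signed stand-in with the same SAN set, using only the standard library; the real flow signs with the profile CA (ca.pem / ca-key.pem) rather than self-signing:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in for the server.pem generated above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-422591"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		DNSNames:     []string{"embed-certs-422591", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
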
	I1228 06:56:53.779563  260283 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:53.779720  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:53.779833  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.797684  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.797962  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.797993  260283 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:51.187721  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:53.686879  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:50.480049  260915 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:56:50.480300  260915 start.go:159] libmachine.API.Create for "newest-cni-479871" (driver="docker")
	I1228 06:56:50.480357  260915 client.go:173] LocalClient.Create starting
	I1228 06:56:50.480438  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:56:50.480482  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480504  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.480573  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:56:50.480601  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480625  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.481050  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:56:50.497636  260915 cli_runner.go:211] docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:56:50.497706  260915 network_create.go:284] running [docker network inspect newest-cni-479871] to gather additional debugging logs...
	I1228 06:56:50.497723  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871
	W1228 06:56:50.516872  260915 cli_runner.go:211] docker network inspect newest-cni-479871 returned with exit code 1
	I1228 06:56:50.516901  260915 network_create.go:287] error running [docker network inspect newest-cni-479871]: docker network inspect newest-cni-479871: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-479871 not found
	I1228 06:56:50.516925  260915 network_create.go:289] output of [docker network inspect newest-cni-479871]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-479871 not found
	
	** /stderr **
	I1228 06:56:50.517047  260915 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:50.535337  260915 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:56:50.536022  260915 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:56:50.536725  260915 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:56:50.537233  260915 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:56:50.538018  260915 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5f10}
	I1228 06:56:50.538069  260915 network_create.go:124] attempt to create docker network newest-cni-479871 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:56:50.538139  260915 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-479871 newest-cni-479871
	I1228 06:56:50.590599  260915 network_create.go:108] docker network newest-cni-479871 192.168.85.0/24 created
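
Network creation above scans candidate private /24 subnets, skipping each one an existing bridge already occupies, and claims the first free one (192.168.85.0/24 here). A sketch of that scan; the step of nine between candidates (49, 58, 67, 76, 85) is inferred from this log, not read out of minikube's source:

package main

import "fmt"

// pickFreeSubnet walks the /24 ladder the log shows and returns the
// first candidate not claimed by an existing bridge network.
func pickFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(pickFreeSubnet(taken)) // 192.168.85.0/24
}
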
	I1228 06:56:50.590626  260915 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-479871" container
	I1228 06:56:50.590684  260915 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:56:50.612756  260915 cli_runner.go:164] Run: docker volume create newest-cni-479871 --label name.minikube.sigs.k8s.io=newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:56:50.632558  260915 oci.go:103] Successfully created a docker volume newest-cni-479871
	I1228 06:56:50.632647  260915 cli_runner.go:164] Run: docker run --rm --name newest-cni-479871-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --entrypoint /usr/bin/test -v newest-cni-479871:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:56:51.057547  260915 oci.go:107] Successfully prepared a docker volume newest-cni-479871
	I1228 06:56:51.057623  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.057634  260915 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:56:51.057688  260915 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:56:54.002932  260915 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.945200662s)
	I1228 06:56:54.002968  260915 kic.go:203] duration metric: took 2.94532948s to extract preloaded images to volume ...
	W1228 06:56:54.003085  260915 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:56:54.003131  260915 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:56:54.003194  260915 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:56:54.071814  260915 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-479871 --name newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-479871 --network newest-cni-479871 --ip 192.168.85.2 --volume newest-cni-479871:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:56:54.369279  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Running}}
	I1228 06:56:54.388635  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.408312  260915 cli_runner.go:164] Run: docker exec newest-cni-479871 stat /var/lib/dpkg/alternatives/iptables
	I1228 06:56:54.458080  260915 oci.go:144] the created container "newest-cni-479871" has a running status.
	I1228 06:56:54.458112  260915 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa...
	I1228 06:56:54.551688  260915 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:56:54.583285  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.607350  260915 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:56:54.607368  260915 kic_runner.go:114] Args: [docker exec --privileged newest-cni-479871 chown docker:docker /home/docker/.ssh/authorized_keys]
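
Key provisioning for the KIC container is plain authorized_keys plumbing: generate a keypair on the host, copy the public half into /home/docker/.ssh/authorized_keys, and chown it to the docker user, as the three steps above show. A sketch of producing the authorized_keys line, assuming golang.org/x/crypto/ssh is available:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair; minikube stores the private half as id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Encode the public half in the one-line authorized_keys format
	// that gets copied into the container above.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}
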
	I1228 06:56:54.652142  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.681007  260915 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:54.681235  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.705265  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.705490  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.705498  260915 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:54.841048  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:54.841091  260915 ubuntu.go:182] provisioning hostname "newest-cni-479871"
	I1228 06:56:54.841152  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.860627  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.860944  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.860965  260915 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-479871 && echo "newest-cni-479871" | sudo tee /etc/hostname
	I1228 06:56:55.000794  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:55.000873  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.023082  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.023416  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.023451  260915 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-479871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-479871/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-479871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.155462  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.155487  260915 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.155505  260915 ubuntu.go:190] setting up certificates
	I1228 06:56:55.155516  260915 provision.go:84] configureAuth start
	I1228 06:56:55.155581  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.175395  260915 provision.go:143] copyHostCerts
	I1228 06:56:55.175450  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.175460  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.175531  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.175657  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.175670  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.175711  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.175807  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.175819  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.175860  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.175997  260915 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.newest-cni-479871 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-479871]
	I1228 06:56:55.234134  260915 provision.go:177] copyRemoteCerts
	I1228 06:56:55.234200  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.234257  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.253397  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
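
Because each container publishes 22/tcp to an ephemeral loopback port (--publish=127.0.0.1::22 in the docker run above), every SSH step first resolves the bound host port with the inspect template seen throughout this log. The same lookup from Go, shelling out to the docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs the same inspect template the log uses to discover
// which ephemeral host port Docker bound to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-479871")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh -p", port, "docker@127.0.0.1")
}
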
	I1228 06:56:54.168584  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:54.168615  260283 machine.go:97] duration metric: took 4.09761028s to provisionDockerMachine
	I1228 06:56:54.168631  260283 start.go:293] postStartSetup for "embed-certs-422591" (driver="docker")
	I1228 06:56:54.168660  260283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:54.168725  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:54.168787  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.192016  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.304369  260283 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:54.308295  260283 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:54.308330  260283 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:54.308342  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:54.308408  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:54.308518  260283 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:54.308669  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:54.316305  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:54.333546  260283 start.go:296] duration metric: took 164.900492ms for postStartSetup
	I1228 06:56:54.333638  260283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:54.333685  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.354220  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.444937  260283 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:54.451873  260283 fix.go:56] duration metric: took 5.175283325s for fixHost
	I1228 06:56:54.451930  260283 start.go:83] releasing machines lock for "embed-certs-422591", held for 5.17534762s
	I1228 06:56:54.452000  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:54.471600  260283 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:54.471642  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.471728  260283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:54.471811  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.492447  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.492692  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.656519  260283 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:54.666648  260283 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:54.712845  260283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:54.719909  260283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:54.719980  260283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:54.729922  260283 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:54.729983  260283 start.go:496] detecting cgroup driver to use...
	I1228 06:56:54.730019  260283 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:54.730084  260283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:54.745512  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:54.760533  260283 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:54.760588  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:54.776631  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:54.789719  260283 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:54.887189  260283 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:54.981826  260283 docker.go:234] disabling docker service ...
	I1228 06:56:54.981900  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:55.001365  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:55.016902  260283 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:55.113674  260283 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:55.201172  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:55.213948  260283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:55.229743  260283 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:55.229795  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.238954  260283 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:55.239021  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.248040  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.257595  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.266670  260283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:55.275055  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.284080  260283 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.292518  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.301093  260283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:55.308817  260283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:55.316372  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.403600  260283 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:55.536797  260283 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:55.536860  260283 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:55.541349  260283 start.go:574] Will wait 60s for crictl version
	I1228 06:56:55.541437  260283 ssh_runner.go:195] Run: which crictl
	I1228 06:56:55.544932  260283 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:55.573996  260283 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:55.574084  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.603216  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.635699  260283 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
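
The CRI-O preparation above is a series of idempotent text edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) followed by a daemon restart. A sketch of the first edit, equivalent to the sed invocation that pins the pause image:

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage mirrors the `sudo sed -i 's|^.*pause_image = .*$|...|'`
// step above: point cri-o at the desired pause image in 02-crio.conf.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}
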
	I1228 06:56:51.528193  261568 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-500581" ...
	I1228 06:56:51.528256  261568 cli_runner.go:164] Run: docker start default-k8s-diff-port-500581
	I1228 06:56:51.794281  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.813604  261568 kic.go:430] container "default-k8s-diff-port-500581" state is running.
	I1228 06:56:51.813999  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:51.836391  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.836675  261568 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:51.836769  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:51.856837  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:51.857168  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:51.857185  261568 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:51.857850  261568 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56468->127.0.0.1:33088: read: connection reset by peer
	I1228 06:56:54.989220  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:54.989252  261568 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-500581"
	I1228 06:56:54.989314  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.011189  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.011424  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.011443  261568 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-500581 && echo "default-k8s-diff-port-500581" | sudo tee /etc/hostname
	I1228 06:56:55.160703  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:55.160788  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.180898  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.181227  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.181257  261568 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-500581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-500581/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-500581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.307110  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.307133  261568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.307155  261568 ubuntu.go:190] setting up certificates
	I1228 06:56:55.307172  261568 provision.go:84] configureAuth start
	I1228 06:56:55.307219  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:55.326689  261568 provision.go:143] copyHostCerts
	I1228 06:56:55.326750  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.326761  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.326811  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.326966  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.326979  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.327002  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.327100  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.327110  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.327132  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.327202  261568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-500581 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-500581 localhost minikube]
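
For context, provision.go:117 above issues the machine's server certificate with SANs covering both IP addresses and host names, signed by the ca.pem/ca-key.pem pair named in the log. A minimal Go sketch of issuing a certificate with those SANs (self-signed here for brevity; the real flow CA-signs, and the org string is taken from the log line):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-500581"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log: IPs plus host names.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"default-k8s-diff-port-500581", "localhost", "minikube"},
		}
		// Self-signed: template doubles as parent. minikube passes its CA here.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
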
	I1228 06:56:55.373177  261568 provision.go:177] copyRemoteCerts
	I1228 06:56:55.373236  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.373295  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.392900  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.486399  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.505187  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 06:56:55.522853  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.540417  261568 provision.go:87] duration metric: took 233.223896ms to configureAuth
	I1228 06:56:55.540444  261568 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.540674  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.540784  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.560885  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.561205  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.561248  261568 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.912261  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.912292  261568 machine.go:97] duration metric: took 4.075596904s to provisionDockerMachine
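
Every ssh_runner/sshutil step in this provisioning phase goes over a single forwarded SSH port (127.0.0.1:33088 in this run) using the machine's id_rsa key. A minimal sketch of that runner pattern, assuming golang.org/x/crypto/ssh; the port and key path are copied from the log, and the host key check is skipped only because this is a throwaway test container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyPEM)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33088", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// One session per command, exactly like the repeated "Run:" lines above.
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("out=%q err=%v\n", out, err)
	}
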
	I1228 06:56:55.912309  261568 start.go:293] postStartSetup for "default-k8s-diff-port-500581" (driver="docker")
	I1228 06:56:55.912323  261568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.912405  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.912473  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.934789  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.028978  261568 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:56.033725  261568 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:56.033788  261568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:56.033803  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:56.033860  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:56.033970  261568 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:56.034118  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:56.043909  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:56.068426  261568 start.go:296] duration metric: took 156.102069ms for postStartSetup
	I1228 06:56:56.068509  261568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:56.068568  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.094504  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.186274  261568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.192245  261568 fix.go:56] duration metric: took 4.684422638s for fixHost
	I1228 06:56:56.192269  261568 start.go:83] releasing machines lock for "default-k8s-diff-port-500581", held for 4.684465564s
	I1228 06:56:56.192339  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:56.215984  261568 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.216056  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.216085  261568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.216168  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.236830  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.237219  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.636809  260283 cli_runner.go:164] Run: docker network inspect embed-certs-422591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:55.657292  260283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:55.661351  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
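
The bash pipeline above pins host.minikube.internal to the gateway IP by filtering out any stale mapping and appending a fresh one. A minimal Go sketch of the same rewrite (the log stages through /tmp/h.$$ plus `sudo cp` to cross the sudo boundary; this sketch writes back directly and so needs root):

	package main

	import (
		"os"
		"strings"
	)

	// pinHost drops any existing "<ip>\t<name>" line and appends the fresh one,
	// mirroring the grep -v / echo / cp pipeline in the log.
	func pinHost(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("192.168.76.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
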
	I1228 06:56:55.671982  260283 kubeadm.go:884] updating cluster {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:55.672135  260283 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:55.672197  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.717231  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.717252  260283 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:55.717304  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.750510  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.750537  260283 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:55.750545  260283 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:55.750638  260283 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:55.750697  260283 ssh_runner.go:195] Run: crio config
	I1228 06:56:55.798757  260283 cni.go:84] Creating CNI manager for ""
	I1228 06:56:55.798781  260283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:55.798794  260283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:55.798816  260283 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422591 NodeName:embed-certs-422591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:55.798981  260283 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422591"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:55.799071  260283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:55.808067  260283 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:55.808139  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:55.816236  260283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1228 06:56:55.830081  260283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:55.844082  260283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
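
The kubeadm config printed above is rendered on the host and copied to the node as kubeadm.yaml.new (2214 bytes). A minimal sketch of the rendering technique, assuming a text/template-style template; the template below is a trimmed placeholder for illustration, not minikube's actual one:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical, trimmed-down template; the real config carries the full
	// InitConfiguration/ClusterConfiguration/KubeletConfiguration shown above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values taken from the log for this node.
		t.Execute(os.Stdout, map[string]any{
			"NodeIP": "192.168.76.2", "Port": 8443, "NodeName": "embed-certs-422591",
		})
	}
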
	I1228 06:56:55.857168  260283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:55.861349  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:55.872967  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.969484  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:55.991172  260283 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591 for IP: 192.168.76.2
	I1228 06:56:55.991194  260283 certs.go:195] generating shared ca certs ...
	I1228 06:56:55.991213  260283 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:55.991369  260283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:55.991423  260283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:55.991435  260283 certs.go:257] generating profile certs ...
	I1228 06:56:55.991549  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/client.key
	I1228 06:56:55.991631  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key.3be22f86
	I1228 06:56:55.991682  260283 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key
	I1228 06:56:55.991823  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:55.991865  260283 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:55.991877  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:55.991914  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:55.991950  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:55.991981  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:55.992051  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.992737  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:56.012567  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:56.034343  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:56.057165  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:56.079350  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1228 06:56:56.103893  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:56.123746  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:56.141940  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:56.160463  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:56.177728  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:56.199019  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:56.220395  260283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:56.235535  260283 ssh_runner.go:195] Run: openssl version
	I1228 06:56:56.242495  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.250951  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:56.260106  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264522  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264582  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.302672  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:56.310442  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.318190  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:56.326937  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330782  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330838  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.366947  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:56.374588  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.382855  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:56.392178  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400669  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400781  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.443361  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
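
The openssl x509 -hash / test -L pairs above install each CA as /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients pick it up. A minimal sketch of that step, shelling out to openssl for the hash (assumed available, as in the log) and creating the symlink with ln -fs semantics:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as the log: print the subject hash of the cert.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // `ln -fs`: replace any existing link
		if err := os.Symlink("/usr/share/ca-certificates/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
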
	I1228 06:56:56.451380  260283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:56.455260  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:56.493195  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:56.552322  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:56.610967  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:56.678082  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:56.744904  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
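
Each `openssl x509 -noout -checkend 86400` run above fails if the cert expires within 24 hours. An equivalent check in Go, a minimal sketch using only the standard library (the path is one of the certs checked in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports an error if the cert at path expires within `window`,
	// mirroring openssl's -checkend semantics.
	func checkEnd(path string, window time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(window).After(cert.NotAfter) {
			return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
		}
		return nil
	}

	func main() {
		if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
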
	I1228 06:56:56.802976  260283 kubeadm.go:401] StartCluster: {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:56.803131  260283 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:56.887317  260283 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:56.902690  260283 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:56Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:56.902780  260283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:56.911889  260283 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:56.911919  260283 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:56.911966  260283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:56.921385  260283 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:56.922175  260283 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422591" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.922628  260283 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422591" cluster setting kubeconfig missing "embed-certs-422591" context setting]
	I1228 06:56:56.923248  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.924994  260283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:56.935152  260283 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 06:56:56.935190  260283 kubeadm.go:602] duration metric: took 23.263516ms to restartPrimaryControlPlane
	I1228 06:56:56.935207  260283 kubeadm.go:403] duration metric: took 132.238201ms to StartCluster
	I1228 06:56:56.935226  260283 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.935306  260283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.936685  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.936960  260283 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:56.937200  260283 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:56.937287  260283 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:56.937304  260283 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	W1228 06:56:56.937311  260283 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:56.937316  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:56.937338  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.937426  260283 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:56.937441  260283 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:56.937706  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937808  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937839  260283 addons.go:70] Setting dashboard=true in profile "embed-certs-422591"
	I1228 06:56:56.937859  260283 addons.go:239] Setting addon dashboard=true in "embed-certs-422591"
	W1228 06:56:56.937868  260283 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:56.937892  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.938390  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.939110  260283 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:56.940441  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:56.975612  260283 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:56.976794  260283 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:56.976818  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:56.976876  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:56.981192  260283 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	W1228 06:56:56.981219  260283 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:56.981245  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.981694  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.982695  260283 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:56.984042  260283 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:56:55.347118  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.367880  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:55.385829  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.405574  260915 provision.go:87] duration metric: took 250.043655ms to configureAuth
	I1228 06:56:55.405599  260915 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.405793  260915 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.405923  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.426557  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.426761  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.426777  260915 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.707096  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.707127  260915 machine.go:97] duration metric: took 1.025985439s to provisionDockerMachine
	I1228 06:56:55.707141  260915 client.go:176] duration metric: took 5.226772639s to LocalClient.Create
	I1228 06:56:55.707163  260915 start.go:167] duration metric: took 5.226863018s to libmachine.API.Create "newest-cni-479871"
	I1228 06:56:55.707178  260915 start.go:293] postStartSetup for "newest-cni-479871" (driver="docker")
	I1228 06:56:55.707191  260915 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.707328  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.707387  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.730590  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:55.828324  260915 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:55.832265  260915 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:55.832288  260915 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:55.832299  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:55.832350  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:55.832419  260915 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:55.832512  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:55.839863  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.861613  260915 start.go:296] duration metric: took 154.42382ms for postStartSetup
	I1228 06:56:55.861983  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.882165  260915 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:56:55.882431  260915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:55.882487  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.907110  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.002055  260915 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.007480  260915 start.go:128] duration metric: took 5.529512048s to createHost
	I1228 06:56:56.007505  260915 start.go:83] releasing machines lock for "newest-cni-479871", held for 5.529670542s
	I1228 06:56:56.007573  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:56.029672  260915 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.029705  260915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.029725  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.029776  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.055251  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.056879  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.223588  260915 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.231474  260915 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.270713  260915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.275245  260915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.275311  260915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.303121  260915 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
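
The find/mv step above sidelines any bridge or podman CNI config by renaming it with a .mk_disabled suffix, leaving the CNI minikube installs (kindnet here) as the only active one. A minimal Go sketch of the same rename pass, using the patterns from the find command in the log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled, matches find's -not -name *.mk_disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
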
	I1228 06:56:56.303143  260915 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.303180  260915 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.303231  260915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.319367  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.332383  260915 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.332437  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.349611  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.366740  260915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.458933  260915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.581970  260915 docker.go:234] disabling docker service ...
	I1228 06:56:56.582057  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.611636  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.629973  260915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.778762  260915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:56.898948  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:56.915292  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:56.936739  260915 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:56.936802  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.957436  260915 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:56.957511  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.970285  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.991323  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.012351  260915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.030720  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.044267  260915 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.063444  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.076260  260915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.086701  260915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.094844  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.197445  260915 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:57.376208  260915 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.376288  260915 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.381285  260915 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.381333  260915 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.386277  260915 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.416647  260915 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
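
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a poll loop: after restarting crio, minikube retries until the socket is usable or the deadline passes. A minimal sketch of that wait, assuming a plain unix-socket dial is an adequate liveness probe:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for {
			conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("crio socket is up")
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for crio.sock:", err)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
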
	I1228 06:56:57.416739  260915 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.451001  260915 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.487677  260915 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:57.488839  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.510156  260915 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.515131  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.529473  260915 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 06:56:56.380475  261568 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.388512  261568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.432498  261568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.437345  261568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.437405  261568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.445717  261568 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:56.445738  261568 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.445770  261568 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.445818  261568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.460887  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.472988  261568 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.473075  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.488438  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.505894  261568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.621379  261568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.764198  261568 docker.go:234] disabling docker service ...
	I1228 06:56:56.764262  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.784627  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.801487  261568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.935018  261568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:57.099832  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:57.114590  261568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:57.138584  261568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:57.138648  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.149353  261568 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:57.149428  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.160151  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.171588  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.182489  261568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.193579  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.206803  261568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.219708  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.230493  261568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.241799  261568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.254056  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.353683  261568 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:57.510586  261568 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.510663  261568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.515591  261568 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.515660  261568 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.520214  261568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.552121  261568 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:57.552210  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.588059  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.633785  261568 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	W1228 06:56:55.687228  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:56.187608  252331 pod_ready.go:94] pod "coredns-7d764666f9-npk6g" is "Ready"
	I1228 06:56:56.187639  252331 pod_ready.go:86] duration metric: took 35.50648982s for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.190301  252331 pod_ready.go:83] waiting for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.194625  252331 pod_ready.go:94] pod "etcd-no-preload-950460" is "Ready"
	I1228 06:56:56.194650  252331 pod_ready.go:86] duration metric: took 4.324521ms for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.196770  252331 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.200996  252331 pod_ready.go:94] pod "kube-apiserver-no-preload-950460" is "Ready"
	I1228 06:56:56.201021  252331 pod_ready.go:86] duration metric: took 4.22637ms for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.203067  252331 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.386984  252331 pod_ready.go:94] pod "kube-controller-manager-no-preload-950460" is "Ready"
	I1228 06:56:56.387016  252331 pod_ready.go:86] duration metric: took 183.928403ms for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.586132  252331 pod_ready.go:83] waiting for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.998562  252331 pod_ready.go:94] pod "kube-proxy-294rn" is "Ready"
	I1228 06:56:56.998589  252331 pod_ready.go:86] duration metric: took 412.431002ms for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.186108  252331 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585825  252331 pod_ready.go:94] pod "kube-scheduler-no-preload-950460" is "Ready"
	I1228 06:56:57.585854  252331 pod_ready.go:86] duration metric: took 399.717455ms for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585870  252331 pod_ready.go:40] duration metric: took 36.908067526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:57.640725  252331 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:57.643532  252331 out.go:179] * Done! kubectl is now configured to use "no-preload-950460" cluster and "default" namespace by default
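
The pod_ready.go waits above poll each kube-system pod until its Ready condition is True or the wait expires. A minimal sketch of that loop, assuming client-go and a default kubeconfig path; the pod name and the 6m bound are taken from this run's log:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.ExpandEnv("$HOME/.kube/config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-950460", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for pod")
		os.Exit(1)
	}
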
	I1228 06:56:56.986006  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:56.986182  260283 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:56.986292  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.021254  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.026470  260283 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.026498  260283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:57.026561  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.032342  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.052347  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.114798  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.136438  260283 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:57.143093  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:57.146367  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:57.146437  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:57.159893  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.162997  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:57.163021  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:57.178802  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:57.178824  260283 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:57.195442  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:57.195462  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:57.215683  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:57.215712  260283 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:57.234390  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:57.234464  260283 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:57.250624  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:57.250659  260283 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:57.269371  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:57.269405  260283 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:57.287286  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:57.287318  260283 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:57.303823  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:58.341277  260283 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:58.341486  260283 node_ready.go:38] duration metric: took 1.204996046s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:58.341543  260283 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:58.341625  260283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:59.079200  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.9360724s)
	I1228 06:56:59.079284  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.919364076s)
	I1228 06:56:59.079836  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.77597476s)
	I1228 06:56:59.079928  260283 api_server.go:72] duration metric: took 2.142935627s to wait for apiserver process to appear ...
	I1228 06:56:59.080185  260283 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:59.080283  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.081622  260283 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422591 addons enable metrics-server
	
	I1228 06:56:59.086704  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.086730  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:59.096150  260283 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
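The 500s above are expected while the apiserver finishes booting: /healthz aggregates per-subsystem checks, and the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that have not completed yet, so minikube keeps polling until the endpoint returns 200. A rough sketch of that loop (the URL is taken from the log; the InsecureSkipVerify transport is an assumption made for brevity, whereas minikube trusts the cluster CA):

	// healthzpoll.go: GET /healthz until it returns 200, treating 500 as "keep waiting".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// No CA bundle in this sketch, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode, "- retrying")
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for healthz")
	}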
	I1228 06:56:57.634878  261568 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-500581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.656475  261568 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.662868  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
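That one-liner is minikube's idiom for pinning a host entry: strip any stale host.minikube.internal line, append the fresh mapping, write a temp file, then sudo cp it over /etc/hosts. The same logic sketched in Go (the entry value comes from the log; the temp path is illustrative, where the real command uses /tmp/h.$$):

	// pinhosts.go: replace the host.minikube.internal mapping in /etc/hosts.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.103.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop the stale mapping, as `grep -v` does in the shell version.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		// Write a temp copy; installing it over /etc/hosts needs root (`sudo cp`).
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
			panic(err)
		}
		fmt.Print(out)
	}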
	I1228 06:56:57.680225  261568 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.680387  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.680441  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.725731  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.725752  261568 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.725791  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.758843  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.758867  261568 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:57.758878  261568 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1228 06:56:57.759067  261568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-500581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:57.759165  261568 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.825229  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.825249  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.825263  261568 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:57.825283  261568 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-500581 NodeName:default-k8s-diff-port-500581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.825427  261568 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-500581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
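The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that lands on the node as /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks such a stream and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 as the decoder (minikube templates this file itself; parsing it back is purely for illustration):

	// kubeadmdocs.go: enumerate the documents in a multi-document kubeadm.yaml.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}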
	I1228 06:56:57.825488  261568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.834015  261568 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.834104  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.842957  261568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1228 06:56:57.861130  261568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.875931  261568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1228 06:56:57.890937  261568 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.894724  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.904606  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:58.027677  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:58.050675  261568 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581 for IP: 192.168.103.2
	I1228 06:56:58.050696  261568 certs.go:195] generating shared ca certs ...
	I1228 06:56:58.050715  261568 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.050893  261568 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:58.050947  261568 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:58.050958  261568 certs.go:257] generating profile certs ...
	I1228 06:56:58.051080  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/client.key
	I1228 06:56:58.051160  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key.4e0fc9ea
	I1228 06:56:58.051212  261568 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key
	I1228 06:56:58.051319  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.051361  261568 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.051375  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.051416  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.051453  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.051491  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.051540  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.052173  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.074301  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.094763  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.114646  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.151474  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 06:56:58.178111  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.196129  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.225303  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:58.252987  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.275157  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.292772  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.324117  261568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.344196  261568 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.359329  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.373180  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.388547  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397646  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397716  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.463000  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.472957  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.482337  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.493234  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497494  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497554  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.554499  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.563535  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.571433  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.580593  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586440  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586531  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.645335  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:58.658570  261568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.664780  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:58.731559  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:58.794292  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:58.854366  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:58.912352  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:58.971537  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
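Each openssl x509 -checkend 86400 probe above asks whether a control-plane certificate remains valid for at least the next 24 hours (exit status 0 if so). The equivalent check using Go's standard library (the path is one of the certs from the log):

	// checkend.go: fail if the certificate expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}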
	I1228 06:56:59.020042  261568 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:59.020173  261568 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:59.077797  261568 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:59.092934  261568 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:59Z" level=error msg="open /run/runc: no such file or directory"
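This warning is benign here: minikube asks runc to list containers so it can unpause any that were left paused, but the /run/runc state directory does not exist on this node, so the probe exits 1 with "no such file or directory" and startup simply continues (kubeadm.go:408 logs it and moves on). A sketch of the probe, with the command and paths exactly as logged:

	// runcprobe.go: list runc-managed containers, tolerating a missing state dir.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "--root", "/run/runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Expected when /run/runc has never been created.
			fmt.Printf("runc list failed (likely nothing paused): %v\n%s", err, out)
			return
		}
		fmt.Printf("containers: %s\n", out)
	}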
	I1228 06:56:59.093006  261568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:59.104271  261568 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:59.104290  261568 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:59.104344  261568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:59.114137  261568 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:59.115134  261568 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-500581" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.115666  261568 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-500581" cluster setting kubeconfig missing "default-k8s-diff-port-500581" context setting]
	I1228 06:56:59.116519  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.118500  261568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:59.129715  261568 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1228 06:56:59.129755  261568 kubeadm.go:602] duration metric: took 25.457297ms to restartPrimaryControlPlane
	I1228 06:56:59.129767  261568 kubeadm.go:403] duration metric: took 109.746452ms to StartCluster
	I1228 06:56:59.129787  261568 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.129865  261568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.131990  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.132237  261568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:59.132306  261568 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:59.132422  261568 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132442  261568 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132440  261568 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132458  261568 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132466  261568 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132472  261568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:59.132501  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1228 06:56:59.132476  261568 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:59.132606  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	W1228 06:56:59.132451  261568 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:59.132643  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.132804  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133076  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133196  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.134412  261568 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:59.135423  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:59.160990  261568 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	W1228 06:56:59.161019  261568 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:59.161062  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.161632  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.164387  261568 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:59.164457  261568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:59.165776  261568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.165796  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:59.165854  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.166051  261568 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:56:57.530689  260915 kubeadm.go:884] updating cluster {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.530879  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.530955  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.573400  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.573424  260915 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.573472  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.605727  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.605749  260915 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:57.605756  260915 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:57.605895  260915 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-479871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:57.605982  260915 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.674056  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.674080  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.674097  260915 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 06:56:57.674130  260915 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-479871 NodeName:newest-cni-479871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.674294  260915 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-479871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:57.674363  260915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.683718  260915 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.683774  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.697208  260915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:57.714193  260915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.736019  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1228 06:56:57.752347  260915 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.757444  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.770946  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.879994  260915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.907780  260915 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871 for IP: 192.168.85.2
	I1228 06:56:57.907815  260915 certs.go:195] generating shared ca certs ...
	I1228 06:56:57.907835  260915 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.907990  260915 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:57.908075  260915 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:57.908095  260915 certs.go:257] generating profile certs ...
	I1228 06:56:57.908171  260915 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key
	I1228 06:56:57.908190  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt with IP's: []
	I1228 06:56:57.970315  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt ...
	I1228 06:56:57.970351  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt: {Name:mk342ba4e76ceae6509b3a9b3e06bce76a0143fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970558  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key ...
	I1228 06:56:57.970573  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key: {Name:mk6097687692feb30b71900aa35b4aee9faa2acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970713  260915 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581
	I1228 06:56:57.970751  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 06:56:58.015745  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 ...
	I1228 06:56:58.015774  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581: {Name:mk60335156a565fa5df02e2632a77039efa4fc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.015954  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 ...
	I1228 06:56:58.015970  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581: {Name:mk63edb29b1d00cff7e6d926b73407d8754bf39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.016080  260915 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt
	I1228 06:56:58.016188  260915 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key
	I1228 06:56:58.016281  260915 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key
	I1228 06:56:58.016305  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt with IP's: []
	I1228 06:56:58.169217  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt ...
	I1228 06:56:58.169306  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt: {Name:mk5ba8b17c1f71db6636f0d33f2f72040423ed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.169505  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key ...
	I1228 06:56:58.169521  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key: {Name:mk4b0b0f3f2c0acfd0e4e41f4c53c10301c4aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
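The crypto.go lines above generate a key pair, build a certificate (for the apiserver cert, with the SAN IPs listed at crypto.go:68), and write the .crt/.key pair under a file lock. A condensed sketch of the generation step (self-signed here for brevity; minikube actually signs profile certs with the shared minikubeCA, and the SAN list and 3-year lifetime, matching CertExpiration:26280h0m0s, are copied from the log):

	// gencert.go: create a key and a certificate with IP SANs, PEM-encoded to stdout.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed: the template doubles as parent; minikube passes the CA cert/key here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}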
	I1228 06:56:58.169760  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.169804  260915 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.169816  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.169857  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.169919  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.169960  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.170023  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.170853  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.189272  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.211984  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.244360  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.268746  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:58.287410  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.315271  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.346205  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:58.384409  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.419149  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.454023  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.476345  260915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.493349  260915 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.500769  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.510854  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.521404  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526814  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526893  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.579536  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.591726  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.603715  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.613518  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.622954  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627431  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627487  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.687477  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:58.699073  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:56:58.710948  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.722754  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.735944  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741915  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741988  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.800642  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.811409  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
	I1228 06:56:58.823986  260915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.829294  260915 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:56:58.829413  260915 kubeadm.go:401] StartCluster: {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:58.829571  260915 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:58.913584  260915 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:58.932081  260915 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:58Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:58.932154  260915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:58.942180  260915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:56:58.953694  260915 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:56:58.953794  260915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:56:58.962855  260915 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:56:58.962880  260915 kubeadm.go:158] found existing configuration files:
	
	I1228 06:56:58.962926  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:56:58.972496  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:56:58.972534  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:56:58.980676  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:56:58.991072  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:56:58.991204  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:56:58.999651  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.008281  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:56:59.008349  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.016399  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:56:59.024902  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:56:59.024962  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
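
The four grep/rm pairs above are a stale-config sweep: a kubeconfig survives only if it already mentions the expected control-plane endpoint. Here none of the files exist yet (fresh node), so all four removals are no-ops before kubeadm regenerates them. A rough Go sketch of the same policy (the log uses sudo grep/rm; reading the files directly assumes root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// sweepStale keeps a kubeconfig only if it points at the expected
// endpoint; otherwise it is removed so kubeadm can regenerate it.
func sweepStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing elsewhere: drop it.
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	sweepStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
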
	I1228 06:56:59.032507  260915 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:56:59.203193  260915 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:56:59.293476  260915 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 06:56:59.167161  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:59.167190  261568 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:59.167250  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.195102  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.209215  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.213140  261568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.213164  261568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:59.213251  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.240235  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.296939  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:59.314285  261568 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:59.324792  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.342466  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:59.342613  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:59.359906  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.364010  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:59.364045  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:59.391472  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:59.391508  261568 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:59.444439  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:59.444465  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:59.472399  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:59.472451  261568 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:59.491671  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:59.491775  261568 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:59.516085  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:59.516120  261568 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:59.540413  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:59.540444  261568 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:59.563645  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:59.563672  261568 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:59.581659  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
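
The pattern above is: scp every dashboard manifest into /etc/kubernetes/addons first, then apply them all in a single kubectl invocation with repeated -f flags. A sketch of building that one command (paths from the log; manifest list abbreviated):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests mirrors the single apply above: the YAMLs are already
// on the node, so one kubectl call handles the whole set.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...).Run()
}

func main() {
	err := applyManifests("/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the rest
		})
	fmt.Println(err)
}
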
	I1228 06:57:00.542003  261568 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:00.542057  261568 node_ready.go:38] duration metric: took 1.227733507s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:57:00.542077  261568 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:00.542135  261568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:01.105548  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.780701527s)
	I1228 06:57:01.105609  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.745665986s)
	I1228 06:57:01.105694  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.523998963s)
	I1228 06:57:01.105746  261568 api_server.go:72] duration metric: took 1.973482037s to wait for apiserver process to appear ...
	I1228 06:57:01.105763  261568 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:01.105885  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.107453  261568 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-500581 addons enable metrics-server
	
	I1228 06:57:01.110897  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.110919  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:01.112410  261568 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:57:01.113682  261568 addons.go:530] duration metric: took 1.981384906s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.097263  260283 addons.go:530] duration metric: took 2.160064919s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.581199  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.589461  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.589517  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:00.081170  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:57:00.085345  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:57:00.086368  260283 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:00.086398  260283 api_server.go:131] duration metric: took 1.006128416s to wait for apiserver health ...
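
The 500-then-200 sequence above is the normal healthz ramp: post-start hooks such as rbac/bootstrap-roles report failed until bootstrap completes, so the client simply re-polls until /healthz returns 200 with body "ok". A hedged sketch of such a poll (URL from the log; TLS verification is skipped only because the harness apiserver cert is not in the client trust store):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy re-polls /healthz, tolerating transient 500s while
// apiserver post-start hooks finish.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is just "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.76.2:8443/healthz", time.Minute))
}
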
	I1228 06:57:00.086409  260283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:00.090076  260283 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:00.090113  260283 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.090124  260283 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.090138  260283 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.090151  260283 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.090162  260283 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.090186  260283 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.090199  260283 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.090212  260283 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.090223  260283 system_pods.go:74] duration metric: took 3.804246ms to wait for pod list to return data ...
	I1228 06:57:00.090236  260283 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:00.092690  260283 default_sa.go:45] found service account: "default"
	I1228 06:57:00.092707  260283 default_sa.go:55] duration metric: took 2.461167ms for default service account to be created ...
	I1228 06:57:00.092720  260283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:00.095179  260283 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:00.095212  260283 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.095224  260283 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.095245  260283 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.095258  260283 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.095267  260283 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.095278  260283 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.095286  260283 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.095297  260283 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.095307  260283 system_pods.go:126] duration metric: took 2.57702ms to wait for k8s-apps to be running ...
	I1228 06:57:00.095319  260283 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:00.095369  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:00.112536  260283 system_svc.go:56] duration metric: took 17.190631ms WaitForService to wait for kubelet
	I1228 06:57:00.112574  260283 kubeadm.go:587] duration metric: took 3.175583293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:00.112597  260283 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:00.117248  260283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:00.117423  260283 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:00.117486  260283 node_conditions.go:105] duration metric: took 4.86014ms to run NodePressure ...
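
The capacity figures above come from the node's status object. Roughly the same read via kubectl jsonpath (node name inferred from the surrounding pod names; the exact fields the tool inspects may differ):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prints e.g. "8 304681132Ki", matching the log lines above.
	out, err := exec.Command("kubectl", "get", "node", "embed-certs-422591",
		"-o", `jsonpath={.status.capacity.cpu} {.status.capacity.ephemeral-storage}`).Output()
	fmt.Println(string(out), err)
}
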
	I1228 06:57:00.117528  260283 start.go:242] waiting for startup goroutines ...
	I1228 06:57:00.117683  260283 start.go:247] waiting for cluster config update ...
	I1228 06:57:00.117705  260283 start.go:256] writing updated cluster config ...
	I1228 06:57:00.118280  260283 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:00.124948  260283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:00.129371  260283 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:02.139240  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
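
The recurring "is not Ready" warnings above are a poll against the pod's Ready condition. An approximate standalone check via kubectl (pod name and namespace from the log; the ~2s cadence is read off the timestamps):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(pod, ns string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for {
		ok, err := podReady("coredns-7d764666f9-dmhdv", "kube-system")
		if err == nil && ok {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second) // matches the cadence in the log
	}
}
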
	I1228 06:57:01.606775  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.611458  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.611490  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:02.106018  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:02.112713  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:57:02.114062  261568 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:02.114087  261568 api_server.go:131] duration metric: took 1.008258851s to wait for apiserver health ...
	I1228 06:57:02.114096  261568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:02.118560  261568 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:02.118604  261568 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.118620  261568 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.118631  261568 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.118640  261568 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.118651  261568 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.118660  261568 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.118668  261568 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.118676  261568 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.118685  261568 system_pods.go:74] duration metric: took 4.581477ms to wait for pod list to return data ...
	I1228 06:57:02.118694  261568 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:02.122002  261568 default_sa.go:45] found service account: "default"
	I1228 06:57:02.122020  261568 default_sa.go:55] duration metric: took 3.320928ms for default service account to be created ...
	I1228 06:57:02.122039  261568 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:02.125517  261568 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:02.125558  261568 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.125571  261568 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.125594  261568 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.125607  261568 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.125619  261568 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.125628  261568 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.125643  261568 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.125650  261568 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.125663  261568 system_pods.go:126] duration metric: took 3.61618ms to wait for k8s-apps to be running ...
	I1228 06:57:02.125675  261568 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:02.125723  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:02.146516  261568 system_svc.go:56] duration metric: took 20.829772ms WaitForService to wait for kubelet
	I1228 06:57:02.146548  261568 kubeadm.go:587] duration metric: took 3.014284503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:02.146571  261568 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:02.151142  261568 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:02.151173  261568 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:02.151191  261568 node_conditions.go:105] duration metric: took 4.614814ms to run NodePressure ...
	I1228 06:57:02.151206  261568 start.go:242] waiting for startup goroutines ...
	I1228 06:57:02.151215  261568 start.go:247] waiting for cluster config update ...
	I1228 06:57:02.151228  261568 start.go:256] writing updated cluster config ...
	I1228 06:57:02.151492  261568 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:02.158502  261568 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:02.163107  261568 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:04.168739  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:06.170937  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:07.273248  260915 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:57:07.273330  260915 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:57:07.273447  260915 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:57:07.273543  260915 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:57:07.273595  260915 kubeadm.go:319] OS: Linux
	I1228 06:57:07.273651  260915 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:57:07.273709  260915 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:57:07.273771  260915 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:57:07.273835  260915 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:57:07.273916  260915 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:57:07.273992  260915 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:57:07.274078  260915 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:57:07.274138  260915 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:57:07.274235  260915 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:57:07.274357  260915 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:57:07.274477  260915 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:57:07.274563  260915 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 06:57:07.276237  260915 out.go:252]   - Generating certificates and keys ...
	I1228 06:57:07.276338  260915 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:57:07.276435  260915 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:57:07.276531  260915 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:57:07.276613  260915 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:57:07.276715  260915 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:57:07.276790  260915 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:57:07.276871  260915 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:57:07.277062  260915 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277160  260915 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:57:07.277338  260915 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277431  260915 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:57:07.277519  260915 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:57:07.277582  260915 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:57:07.277660  260915 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:57:07.277726  260915 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:57:07.277802  260915 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:57:07.277871  260915 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:57:07.277975  260915 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:57:07.278078  260915 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:57:07.278183  260915 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:57:07.278271  260915 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:57:07.279768  260915 out.go:252]   - Booting up control plane ...
	I1228 06:57:07.279971  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:57:07.280118  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:57:07.280203  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:57:07.280341  260915 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:57:07.280459  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:57:07.280594  260915 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:57:07.280705  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:57:07.280752  260915 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:57:07.280918  260915 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:57:07.281066  260915 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:57:07.281146  260915 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.62379ms
	I1228 06:57:07.281264  260915 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:57:07.281347  260915 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1228 06:57:07.281414  260915 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:57:07.281473  260915 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:57:07.281553  260915 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006106358s
	I1228 06:57:07.281644  260915 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.100872978s
	I1228 06:57:07.281739  260915 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001834302s
	I1228 06:57:07.281997  260915 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:57:07.282187  260915 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:57:07.282270  260915 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:57:07.282522  260915 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-479871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:57:07.282694  260915 kubeadm.go:319] [bootstrap-token] Using token: 1h1kon.f0uwfkf8goxau87f
	I1228 06:57:07.285641  260915 out.go:252]   - Configuring RBAC rules ...
	I1228 06:57:07.285801  260915 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:57:07.285940  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:57:07.286155  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:57:07.286341  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:57:07.286509  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:57:07.286626  260915 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:57:07.286789  260915 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:57:07.286944  260915 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:57:07.287022  260915 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:57:07.287050  260915 kubeadm.go:319] 
	I1228 06:57:07.287134  260915 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:57:07.287148  260915 kubeadm.go:319] 
	I1228 06:57:07.287240  260915 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:57:07.287251  260915 kubeadm.go:319] 
	I1228 06:57:07.287284  260915 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:57:07.287366  260915 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:57:07.287440  260915 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:57:07.287451  260915 kubeadm.go:319] 
	I1228 06:57:07.287527  260915 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:57:07.287537  260915 kubeadm.go:319] 
	I1228 06:57:07.287606  260915 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:57:07.287615  260915 kubeadm.go:319] 
	I1228 06:57:07.287692  260915 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:57:07.287797  260915 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:57:07.287900  260915 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:57:07.287911  260915 kubeadm.go:319] 
	I1228 06:57:07.288018  260915 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:57:07.288149  260915 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:57:07.288163  260915 kubeadm.go:319] 
	I1228 06:57:07.288271  260915 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288398  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:57:07.288433  260915 kubeadm.go:319] 	--control-plane 
	I1228 06:57:07.288450  260915 kubeadm.go:319] 
	I1228 06:57:07.288562  260915 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:57:07.288578  260915 kubeadm.go:319] 
	I1228 06:57:07.288682  260915 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288837  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
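
The --discovery-token-ca-cert-hash printed above follows kubeadm's standard scheme: SHA-256 over the CA certificate's DER-encoded Subject Public Key Info. A sketch that reproduces it (the ca.crt path under the certificateDir named above is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes kubeadm's discovery hash: sha256 over the
// CA cert's raw SubjectPublicKeyInfo.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(cert.RawSubjectPublicKeyInfo)), nil
}

func main() {
	fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
}
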
	I1228 06:57:07.288863  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:57:07.288884  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:07.290538  260915 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1228 06:57:04.636200  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:06.636940  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:07.291873  260915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:57:07.298126  260915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:57:07.298146  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:57:07.319436  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:57:07.645417  260915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:57:07.645491  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-479871 minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=newest-cni-479871 minikube.k8s.io/primary=true
	I1228 06:57:07.645603  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:07.785117  260915 ops.go:34] apiserver oom_adj: -16
	I1228 06:57:07.785122  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.285590  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.785995  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.285435  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.785188  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
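
The repeated `kubectl get sa default` runs above, spaced about 500ms apart, are a wait for the "default" ServiceAccount, which kubeadm clusters provision asynchronously after init. A sketch of that retry loop (binary and kubeconfig paths from the log; timeout assumed):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` until it succeeds,
// i.e. until the ServiceAccount controller has created the account.
func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitDefaultSA("/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute))
}
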
	W1228 06:57:08.671402  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:10.673458  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.028693176Z" level=info msg="Started container" PID=1789 containerID=d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper id=ce12700f-a916-4cd1-8ecd-5c41ffae1b1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=88359d28d1d76e1447f1a55227926eb5d3a01e03672de29d3e32104f0c3d03f7
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.064362639Z" level=info msg="Removing container: d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0" id=e8c804ee-4550-451d-9aaf-64e35618b0de name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.075019495Z" level=info msg="Removed container d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=e8c804ee-4550-451d-9aaf-64e35618b0de name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.099821369Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1add2b67-3a0c-4798-85ee-43855598d1a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.100871601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d3152332-9e75-4036-909d-6f7d6d30c578 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.102050678Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fff877e8-ed85-478d-bffc-503b13e7d38b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.102249291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108243226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108463389Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/19618db850b10b45bb7445aad15c3c9e9de73c483dd3521696d2a542f52b0801/merged/etc/passwd: no such file or directory"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108488516Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/19618db850b10b45bb7445aad15c3c9e9de73c483dd3521696d2a542f52b0801/merged/etc/group: no such file or directory"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108862616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.142127993Z" level=info msg="Created container 036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590: kube-system/storage-provisioner/storage-provisioner" id=fff877e8-ed85-478d-bffc-503b13e7d38b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.143150333Z" level=info msg="Starting container: 036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590" id=76c8e9cc-6172-4aa5-8d68-7d1e8a84ba59 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.145290183Z" level=info msg="Started container" PID=1803 containerID=036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590 description=kube-system/storage-provisioner/storage-provisioner id=76c8e9cc-6172-4aa5-8d68-7d1e8a84ba59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0cc48cb5b0da8b2d24902541cb7597775b8c3fa8a537e72cd8fa2f551d09e42
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.987764444Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16833812-0b71-486c-b5de-74ef9427bde2 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.988770913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=83d5199d-e724-4ce6-8083-665d98689124 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.989912278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=59dcbc32-324e-48b2-8472-69e97bb9de03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.990087101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.996819157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.997372891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.031938473Z" level=info msg="Created container 8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=59dcbc32-324e-48b2-8472-69e97bb9de03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.032673084Z" level=info msg="Starting container: 8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5" id=4cad8df5-0fa3-4f92-8a58-7f8ca93c16ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.035007261Z" level=info msg="Started container" PID=1842 containerID=8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper id=4cad8df5-0fa3-4f92-8a58-7f8ca93c16ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=88359d28d1d76e1447f1a55227926eb5d3a01e03672de29d3e32104f0c3d03f7
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.141097215Z" level=info msg="Removing container: d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949" id=cecd6680-be7b-43aa-ab82-609ad7fb3f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.153441584Z" level=info msg="Removed container d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=cecd6680-be7b-43aa-ab82-609ad7fb3f7b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8ad3c88b5e19f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   88359d28d1d76       dashboard-metrics-scraper-867fb5f87b-jczrv   kubernetes-dashboard
	036e9a1dc89d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         2                   e0cc48cb5b0da       storage-provisioner                          kube-system
	d73d54615acbd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   1feb5485f9d16       kubernetes-dashboard-b84665fb8-52cwp         kubernetes-dashboard
	cd155e1fe0251       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     1                   3f136a590397c       coredns-7d764666f9-npk6g                     kube-system
	e43f1bb42e622       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   0c1b1a82aac8a       busybox                                      default
	fd03d8dbcc76e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         1                   e0cc48cb5b0da       storage-provisioner                          kube-system
	2eb68979e3082       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 1                   2c18ae3ba3e96       kindnet-xhb7x                                kube-system
	ab6dccc27bbdf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           52 seconds ago      Running             kube-proxy                  1                   e94a130d37e4c       kube-proxy-294rn                             kube-system
	bb07d52b1828d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           55 seconds ago      Running             kube-controller-manager     1                   90785ca266249       kube-controller-manager-no-preload-950460    kube-system
	284f317ab1ebb       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           55 seconds ago      Running             kube-scheduler              1                   d3d8d62c1e4f9       kube-scheduler-no-preload-950460             kube-system
	335d2285b48ba       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           55 seconds ago      Running             etcd                        1                   2d1772c838bf1       etcd-no-preload-950460                       kube-system
	a6b593db539dd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           55 seconds ago      Running             kube-apiserver              1                   fdd06bdaac050       kube-apiserver-no-preload-950460             kube-system
	
	
	==> describe nodes <==
	Name:               no-preload-950460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-950460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-950460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_55_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:55:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-950460
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:56:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-950460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                89ca7428-7fe3-48bf-8e6c-c80da5b6d3a1
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-npk6g                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-950460                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-xhb7x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-950460              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-950460     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-294rn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-950460              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-jczrv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-52cwp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node no-preload-950460 event: Registered Node no-preload-950460 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-950460 event: Registered Node no-preload-950460 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:12 up 39 min,  0 user,  load average: 5.29, 3.17, 1.92
	Linux no-preload-950460 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:34 no-preload-950460 kubelet[726]: E1228 06:56:34.053787     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-950460" containerName="kube-apiserver"
	Dec 28 06:56:35 no-preload-950460 kubelet[726]: E1228 06:56:35.815700     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-950460" containerName="kube-controller-manager"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: E1228 06:56:36.383403     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: E1228 06:56:36.987280     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: I1228 06:56:36.987321     726 scope.go:122] "RemoveContainer" containerID="d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: I1228 06:56:37.063107     726 scope.go:122] "RemoveContainer" containerID="d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063235     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063386     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: I1228 06:56:37.063411     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063562     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: E1228 06:56:40.514234     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: I1228 06:56:40.514271     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: E1228 06:56:40.514417     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:56:51 no-preload-950460 kubelet[726]: I1228 06:56:51.099372     726 scope.go:122] "RemoveContainer" containerID="fd03d8dbcc76e4097ae1b7d2537ef7ada5f92d3166384ba71570161b37557929"
	Dec 28 06:56:55 no-preload-950460 kubelet[726]: E1228 06:56:55.678874     726 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-npk6g" containerName="coredns"
	Dec 28 06:57:01 no-preload-950460 kubelet[726]: E1228 06:57:01.987214     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:01 no-preload-950460 kubelet[726]: I1228 06:57:01.987252     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: I1228 06:57:02.139116     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: E1228 06:57:02.139482     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: I1228 06:57:02.139510     726 scope.go:122] "RemoveContainer" containerID="8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: E1228 06:57:02.139690     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:57:09 no-preload-950460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:10 no-preload-950460 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:10 no-preload-950460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:10 no-preload-950460 systemd[1]: kubelet.service: Consumed 1.696s CPU time.
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:57:12.025321  268172 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.121232  268172 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.210158  268172 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.310957  268172 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.387977  268172 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.461798  268172 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.529589  268172 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.610497  268172 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:12.698565  268172 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:12Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
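Note on the repeated stderr errors above: per the logs.go:279 lines, minikube's log collector enumerates containers by running "sudo runc --root /run/runc list -f json", and every per-component listing fails because /run/runc does not exist on this CRI-O node. A plausible cause (an inference, not confirmed by this log) is that the CRI-O build in use drives containers through a different OCI runtime state directory, for example because recent CRI-O releases default to crun rather than runc. A diagnostic sketch for a follow-up from the host (hypothetical, not part of the test run):

	# list containers through the CRI, which works regardless of the low-level OCI runtime
	out/minikube-linux-amd64 -p no-preload-950460 ssh -- sudo crictl ps -a
	# show which low-level runtime CRI-O is configured with
	out/minikube-linux-amd64 -p no-preload-950460 ssh -- sudo crio config | grep -i -A3 default_runtime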
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950460 -n no-preload-950460: exit status 2 (421.596166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-950460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
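The jsonpath query above prints the names of all pods whose phase is not Running, across every namespace; the absence of captured output after it suggests nothing matched (note that Succeeded pods would also match, since the selector only excludes the Running phase). The same check with ordinary tabular output, for anyone reproducing locally (a sketch, not from the test run):

	# equivalent check, human-readable form
	kubectl --context no-preload-950460 get pods -A --field-selector=status.phase!=Running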
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-950460
helpers_test.go:244: (dbg) docker inspect no-preload-950460:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	        "Created": "2025-12-28T06:55:00.893625015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:09.982557423Z",
	            "FinishedAt": "2025-12-28T06:56:08.019682014Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hostname",
	        "HostsPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/hosts",
	        "LogPath": "/var/lib/docker/containers/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137/7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137-json.log",
	        "Name": "/no-preload-950460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-950460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-950460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7db017036a6f30a171f925d59009395ab52e0e628d6007614a4cc984fdf39137",
	                "LowerDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/054301f245be985309742daf824fbdce12364ee376445d3bf62cf3ee351edbca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-950460",
	                "Source": "/var/lib/docker/volumes/no-preload-950460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-950460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-950460",
	                "name.minikube.sigs.k8s.io": "no-preload-950460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f4fcd0aa467545a18fbda5e7e520616ea83d2e9b4f2c45d5573f4c9b0e4b1362",
	            "SandboxKey": "/var/run/docker/netns/f4fcd0aa4675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-950460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "17f0dbca0a318f9427a146748bffe1e85820955f787d11210b299ebcf405441e",
	                    "EndpointID": "87c66e7af727b8b5210d0bc65bb4faef63cfac5d88ac700fba4088acaa66c4bc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "36:2b:81:9b:ac:c6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-950460",
	                        "7db017036a6f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
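The inspect output above shows the node container itself is healthy: State.Status is "running" with RestartCount 0, and the Kubernetes API-server port 8443/tcp is published on 127.0.0.1:33081. To pull just those fields instead of scanning the full JSON, a convenience sketch (assuming jq is installed on the host; the field paths follow the inspect output above):

	# extract container state and the published API-server port from the inspect JSON
	docker inspect no-preload-950460 | jq '.[0].State.Status, .[0].NetworkSettings.Ports["8443/tcp"]'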
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460: exit status 2 (360.96784ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
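As with the earlier APIServer check, the Host field reports Running while the command exits with status 2. One plausible reading (an inference, not something the harness asserts) is that minikube status exits non-zero whenever some tracked component is not Running, and the kubelet log above shows kubelet.service being stopped at 06:57:09, consistent with the pause operation under test. Running status without a --format template would show the component states side by side (sketch, not captured in this run):

	# unformatted status shows host/kubelet/apiserver states together
	out/minikube-linux-amd64 status -p no-preload-950460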
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950460 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-950460 logs -n 25: (1.202260121s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:51.304822  261568 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:51.304949  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.304962  261568 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:51.304969  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.305236  261568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:51.305658  261568 out.go:368] Setting JSON to false
	I1228 06:56:51.306949  261568 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2363,"bootTime":1766902648,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:51.306998  261568 start.go:143] virtualization: kvm guest
	I1228 06:56:51.312562  261568 out.go:179] * [default-k8s-diff-port-500581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:51.313893  261568 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:51.313933  261568 notify.go:221] Checking for updates...
	I1228 06:56:51.316760  261568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:51.318014  261568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:51.322529  261568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:51.323905  261568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:51.325197  261568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:51.326905  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:51.327673  261568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:51.352695  261568 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:51.352843  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.414000  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.40353533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:51.414142  261568 docker.go:319] overlay module found
	I1228 06:56:51.418800  261568 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:51.419979  261568 start.go:309] selected driver: docker
	I1228 06:56:51.419992  261568 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.420098  261568 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:51.420695  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.478184  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.468547864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
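The info.go step above shells out to docker system info --format "{{json .}}" and decodes the JSON into the struct it then logs. A minimal Go sketch of the same decode, keeping only a few of the fields visible above (the struct name and field subset are illustrative, not minikube's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo captures a handful of the JSON keys shown in the log line above.
	type dockerInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		CgroupDriver    string `json:"CgroupDriver"`
		OperatingSystem string `json:"OperatingSystem"`
	}

	func main() {
		// same command the log records under cli_runner.go
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker not available:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%d CPUs, %d bytes RAM, cgroup driver %q on %s\n",
			info.NCPU, info.MemTotal, info.CgroupDriver, info.OperatingSystem)
	}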
	I1228 06:56:51.478493  261568 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:51.478528  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:51.478601  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
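The cni.go recommendation above follows from the driver/runtime pair. A hedged sketch of that branch (function name hypothetical; minikube's real logic covers many more combinations):

	package main

	import "fmt"

	// cniForConfig illustrates the decision logged above: the docker driver
	// combined with the crio runtime gets kindnet. Hypothetical helper, not
	// minikube's cni package API.
	func cniForConfig(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "" // empty: leave CNI selection to later defaults (assumption)
	}

	func main() {
		fmt.Println(cniForConfig("docker", "crio")) // kindnet
	}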
	I1228 06:56:51.478656  261568 start.go:353] cluster config:
	{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.480689  261568 out.go:179] * Starting "default-k8s-diff-port-500581" primary control-plane node in "default-k8s-diff-port-500581" cluster
	I1228 06:56:51.482007  261568 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:51.483353  261568 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:51.484469  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.484517  261568 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:56:51.484526  261568 cache.go:65] Caching tarball of preloaded images
	I1228 06:56:51.484594  261568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:51.484617  261568 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:56:51.484731  261568 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
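The preload steps above boil down to a stat on the versioned tarball: if it is already in the cache, the download is skipped. A minimal sketch of that check, assuming the path layout shown in the log (helper name hypothetical):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// hasPreload mirrors "Found local preload ... skipping download": the
	// filename encodes the preload schema ("v18" in this log), the Kubernetes
	// version, and the runtime.
	func hasPreload(cacheDir, k8sVersion, runtime string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		p := filepath.Join(cacheDir, "preloaded-tarball", name)
		_, err := os.Stat(p)
		return p, err == nil
	}

	func main() {
		if p, ok := hasPreload(os.Getenv("HOME")+"/.minikube/cache", "v1.35.0", "cri-o"); ok {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download")
		}
	}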
	I1228 06:56:51.484886  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.507639  261568 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:51.507662  261568 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:51.507678  261568 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:51.507716  261568 start.go:360] acquireMachinesLock for default-k8s-diff-port-500581: {Name:mk09ab6a942c8bf16d457c533e6be9200b317247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:51.507793  261568 start.go:364] duration metric: took 42.618µs to acquireMachinesLock for "default-k8s-diff-port-500581"
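The acquireMachinesLock line above carries Delay:500ms and Timeout:10m0s, the classic retry-with-delay pattern around a shared lock. A sketch under the assumption of a simple O_EXCL lock file (minikube's real mutex lives in its own package):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries creating an exclusive lock file every `delay` until
	// `timeout` elapses, matching the Delay/Timeout fields logged above.
	func acquireLock(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return nil // lock held; caller removes path to release
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		// mirrors the "duration metric: took ..." line in the log
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
		os.Remove("/tmp/minikube-machines.lock")
	}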
	I1228 06:56:51.507811  261568 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:51.507818  261568 fix.go:54] fixHost starting: 
	I1228 06:56:51.508017  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.526407  261568 fix.go:112] recreateIfNeeded on default-k8s-diff-port-500581: state=Stopped err=<nil>
	W1228 06:56:51.526437  261568 fix.go:138] unexpected machine state, will restart: <nil>
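The recreateIfNeeded/fixHost lines above encode a small state machine: a Stopped container is restarted rather than recreated. A speculative sketch of that branch (names are illustrative, not fix.go's API):

	package main

	import "fmt"

	// fixAction sketches the decision behind "recreateIfNeeded ... state=Stopped"
	// and "unexpected machine state, will restart".
	func fixAction(state string) string {
		switch state {
		case "Running":
			return "reuse existing machine"
		case "Stopped":
			return "restart container" // the path this log takes next
		default:
			return "recreate machine"
		}
	}

	func main() {
		fmt.Println(fixAction("Stopped"))
	}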
	I1228 06:56:49.299782  260283 out.go:252] * Restarting existing docker container for "embed-certs-422591" ...
	I1228 06:56:49.299856  260283 cli_runner.go:164] Run: docker start embed-certs-422591
	I1228 06:56:50.029376  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:50.048972  260283 kic.go:430] container "embed-certs-422591" state is running.
	I1228 06:56:50.049416  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:50.070752  260283 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/config.json ...
	I1228 06:56:50.070988  260283 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:50.071086  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:50.094281  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:50.094592  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:50.094614  260283 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:50.095430  260283 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32768->127.0.0.1:33083: read: connection reset by peer
	I1228 06:56:53.224998  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.225041  260283 ubuntu.go:182] provisioning hostname "embed-certs-422591"
	I1228 06:56:53.225100  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.244551  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.244828  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.244846  260283 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-422591 && echo "embed-certs-422591" | sudo tee /etc/hostname
	I1228 06:56:53.389453  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.389539  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.409408  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.409692  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.409717  260283 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422591/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:53.535649  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
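The /etc/hosts script shown a few lines up is generated per node name and is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already present. A small Go sketch templating that exact shell snippet (the wrapper function is hypothetical; the shell text mirrors the log):

	package main

	import "fmt"

	// hostsScript returns the idempotent /etc/hosts update for one hostname.
	func hostsScript(name string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, name)
	}

	func main() { fmt.Println(hostsScript("embed-certs-422591")) }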
	I1228 06:56:53.535685  260283 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:53.535733  260283 ubuntu.go:190] setting up certificates
	I1228 06:56:53.535752  260283 provision.go:84] configureAuth start
	I1228 06:56:53.535838  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:53.554332  260283 provision.go:143] copyHostCerts
	I1228 06:56:53.554402  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:53.554423  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:53.554514  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:53.554657  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:53.554671  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:53.554718  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:53.554817  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:53.554834  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:53.554898  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:53.554996  260283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422591 san=[127.0.0.1 192.168.76.2 embed-certs-422591 localhost minikube]
	I1228 06:56:53.616863  260283 provision.go:177] copyRemoteCerts
	I1228 06:56:53.616949  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:53.616995  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.635721  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:53.727300  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:53.745199  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1228 06:56:53.763059  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:53.779536  260283 provision.go:87] duration metric: took 243.761087ms to configureAuth
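configureAuth above generates a server certificate whose SAN list is printed in the "generating server cert" line. A compressed Go sketch of issuing such a cert with crypto/x509; it is self-signed here for brevity, whereas minikube signs server.pem with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-422591"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			// SANs mirror the log's san=[...] list
			DNSNames:    []string{"embed-certs-422591", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}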
	I1228 06:56:53.779563  260283 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:53.779720  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:53.779833  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.797684  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.797962  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.797993  260283 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:51.187721  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:53.686879  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:50.480049  260915 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:56:50.480300  260915 start.go:159] libmachine.API.Create for "newest-cni-479871" (driver="docker")
	I1228 06:56:50.480357  260915 client.go:173] LocalClient.Create starting
	I1228 06:56:50.480438  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:56:50.480482  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480504  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.480573  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:56:50.480601  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480625  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.481050  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:56:50.497636  260915 cli_runner.go:211] docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:56:50.497706  260915 network_create.go:284] running [docker network inspect newest-cni-479871] to gather additional debugging logs...
	I1228 06:56:50.497723  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871
	W1228 06:56:50.516872  260915 cli_runner.go:211] docker network inspect newest-cni-479871 returned with exit code 1
	I1228 06:56:50.516901  260915 network_create.go:287] error running [docker network inspect newest-cni-479871]: docker network inspect newest-cni-479871: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-479871 not found
	I1228 06:56:50.516925  260915 network_create.go:289] output of [docker network inspect newest-cni-479871]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-479871 not found
	
	** /stderr **
	I1228 06:56:50.517047  260915 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:50.535337  260915 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:56:50.536022  260915 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:56:50.536725  260915 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:56:50.537233  260915 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:56:50.538018  260915 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5f10}
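The subnet scan above steps the third octet by 9 (49, 58, 67, 76, 85, ...) and takes the first /24 not already claimed by an existing bridge. A sketch of that walk, with the "taken" lookup stubbed out (the step size is inferred from this log, not quoted from minikube's source):

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate /24 not marked as taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
	}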
	I1228 06:56:50.538069  260915 network_create.go:124] attempt to create docker network newest-cni-479871 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:56:50.538139  260915 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-479871 newest-cni-479871
	I1228 06:56:50.590599  260915 network_create.go:108] docker network newest-cni-479871 192.168.85.0/24 created
	I1228 06:56:50.590626  260915 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-479871" container
	I1228 06:56:50.590684  260915 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:56:50.612756  260915 cli_runner.go:164] Run: docker volume create newest-cni-479871 --label name.minikube.sigs.k8s.io=newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:56:50.632558  260915 oci.go:103] Successfully created a docker volume newest-cni-479871
	I1228 06:56:50.632647  260915 cli_runner.go:164] Run: docker run --rm --name newest-cni-479871-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --entrypoint /usr/bin/test -v newest-cni-479871:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:56:51.057547  260915 oci.go:107] Successfully prepared a docker volume newest-cni-479871
	I1228 06:56:51.057623  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.057634  260915 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:56:51.057688  260915 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:56:54.002932  260915 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.945200662s)
	I1228 06:56:54.002968  260915 kic.go:203] duration metric: took 2.94532948s to extract preloaded images to volume ...
	W1228 06:56:54.003085  260915 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:56:54.003131  260915 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:56:54.003194  260915 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:56:54.071814  260915 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-479871 --name newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-479871 --network newest-cni-479871 --ip 192.168.85.2 --volume newest-cni-479871:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:56:54.369279  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Running}}
	I1228 06:56:54.388635  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.408312  260915 cli_runner.go:164] Run: docker exec newest-cni-479871 stat /var/lib/dpkg/alternatives/iptables
	I1228 06:56:54.458080  260915 oci.go:144] the created container "newest-cni-479871" has a running status.
	I1228 06:56:54.458112  260915 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa...
	I1228 06:56:54.551688  260915 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:56:54.583285  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.607350  260915 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:56:54.607368  260915 kic_runner.go:114] Args: [docker exec --privileged newest-cni-479871 chown docker:docker /home/docker/.ssh/authorized_keys]
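"Creating ssh key for kic" above generates a keypair on the host and copies the public half into the container's authorized_keys. A sketch of the same idea using golang.org/x/crypto/ssh (requires that module); minikube writes RSA id_rsa files, ed25519 is used here only to keep the example short:

	package main

	import (
		"crypto/ed25519"
		"crypto/rand"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		pub, _, err := ed25519.GenerateKey(rand.Reader)
		if err != nil {
			panic(err)
		}
		sshPub, err := ssh.NewPublicKey(pub)
		if err != nil {
			panic(err)
		}
		// this is the payload behind "id_rsa.pub --> /home/docker/.ssh/authorized_keys"
		os.Stdout.Write(ssh.MarshalAuthorizedKey(sshPub))
	}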
	I1228 06:56:54.652142  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.681007  260915 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:54.681235  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.705265  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.705490  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.705498  260915 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:54.841048  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:54.841091  260915 ubuntu.go:182] provisioning hostname "newest-cni-479871"
	I1228 06:56:54.841152  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.860627  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.860944  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.860965  260915 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-479871 && echo "newest-cni-479871" | sudo tee /etc/hostname
	I1228 06:56:55.000794  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:55.000873  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.023082  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.023416  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.023451  260915 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-479871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-479871/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-479871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.155462  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.155487  260915 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.155505  260915 ubuntu.go:190] setting up certificates
	I1228 06:56:55.155516  260915 provision.go:84] configureAuth start
	I1228 06:56:55.155581  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.175395  260915 provision.go:143] copyHostCerts
	I1228 06:56:55.175450  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.175460  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.175531  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.175657  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.175670  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.175711  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.175807  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.175819  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.175860  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.175997  260915 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.newest-cni-479871 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-479871]
	I1228 06:56:55.234134  260915 provision.go:177] copyRemoteCerts
	I1228 06:56:55.234200  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.234257  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.253397  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:54.168584  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:54.168615  260283 machine.go:97] duration metric: took 4.09761028s to provisionDockerMachine
	I1228 06:56:54.168631  260283 start.go:293] postStartSetup for "embed-certs-422591" (driver="docker")
	I1228 06:56:54.168660  260283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:54.168725  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:54.168787  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.192016  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.304369  260283 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:54.308295  260283 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:54.308330  260283 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:54.308342  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:54.308408  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:54.308518  260283 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:54.308669  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:54.316305  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:54.333546  260283 start.go:296] duration metric: took 164.900492ms for postStartSetup
	I1228 06:56:54.333638  260283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:54.333685  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.354220  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.444937  260283 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:54.451873  260283 fix.go:56] duration metric: took 5.175283325s for fixHost
	I1228 06:56:54.451930  260283 start.go:83] releasing machines lock for "embed-certs-422591", held for 5.17534762s
	I1228 06:56:54.452000  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:54.471600  260283 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:54.471642  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.471728  260283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:54.471811  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.492447  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.492692  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.656519  260283 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:54.666648  260283 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:54.712845  260283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:54.719909  260283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:54.719980  260283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:54.729922  260283 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:54.729983  260283 start.go:496] detecting cgroup driver to use...
	I1228 06:56:54.730019  260283 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:54.730084  260283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:54.745512  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:54.760533  260283 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:54.760588  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:54.776631  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:54.789719  260283 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:54.887189  260283 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:54.981826  260283 docker.go:234] disabling docker service ...
	I1228 06:56:54.981900  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:55.001365  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:55.016902  260283 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:55.113674  260283 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:55.201172  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:55.213948  260283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:55.229743  260283 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:55.229795  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.238954  260283 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:55.239021  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.248040  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.257595  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.266670  260283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:55.275055  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.284080  260283 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.292518  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.301093  260283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:55.308817  260283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:55.316372  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.403600  260283 ssh_runner.go:195] Run: sudo systemctl restart crio
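The sed commands above rewrite two settings in /etc/crio/crio.conf.d/02-crio.conf before the crio restart: the pause image and the cgroup manager. A pure-Go illustration of the same substitutions (minikube shells out to sed instead):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the two line-level replacements from the log.
	func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10.1", "systemd"))
	}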
	I1228 06:56:55.536797  260283 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:55.536860  260283 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:55.541349  260283 start.go:574] Will wait 60s for crictl version
	I1228 06:56:55.541437  260283 ssh_runner.go:195] Run: which crictl
	I1228 06:56:55.544932  260283 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:55.573996  260283 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
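"Will wait 60s for socket path /var/run/crio/crio.sock" above is a readiness poll: stat the runtime socket until it appears or the budget runs out. A sketch of that loop (the poll interval is an assumption; the log does not state it):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the runtime socket until timeout.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return errors.New("timed out waiting for " + path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}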
	I1228 06:56:55.574084  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.603216  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.635699  260283 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:51.528193  261568 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-500581" ...
	I1228 06:56:51.528256  261568 cli_runner.go:164] Run: docker start default-k8s-diff-port-500581
	I1228 06:56:51.794281  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.813604  261568 kic.go:430] container "default-k8s-diff-port-500581" state is running.
	I1228 06:56:51.813999  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:51.836391  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.836675  261568 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:51.836769  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:51.856837  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:51.857168  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:51.857185  261568 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:51.857850  261568 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56468->127.0.0.1:33088: read: connection reset by peer
	I1228 06:56:54.989220  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:54.989252  261568 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-500581"
	I1228 06:56:54.989314  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.011189  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.011424  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.011443  261568 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-500581 && echo "default-k8s-diff-port-500581" | sudo tee /etc/hostname
	I1228 06:56:55.160703  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:55.160788  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.180898  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.181227  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.181257  261568 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-500581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-500581/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-500581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.307110  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.307133  261568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.307155  261568 ubuntu.go:190] setting up certificates
	I1228 06:56:55.307172  261568 provision.go:84] configureAuth start
	I1228 06:56:55.307219  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:55.326689  261568 provision.go:143] copyHostCerts
	I1228 06:56:55.326750  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.326761  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.326811  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.326966  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.326979  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.327002  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.327100  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.327110  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.327132  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.327202  261568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-500581 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-500581 localhost minikube]
	I1228 06:56:55.373177  261568 provision.go:177] copyRemoteCerts
	I1228 06:56:55.373236  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.373295  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.392900  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.486399  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.505187  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 06:56:55.522853  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.540417  261568 provision.go:87] duration metric: took 233.223896ms to configureAuth
	I1228 06:56:55.540444  261568 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.540674  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.540784  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.560885  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.561205  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.561248  261568 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.912261  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.912292  261568 machine.go:97] duration metric: took 4.075596904s to provisionDockerMachine
	I1228 06:56:55.912309  261568 start.go:293] postStartSetup for "default-k8s-diff-port-500581" (driver="docker")
	I1228 06:56:55.912323  261568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.912405  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.912473  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.934789  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.028978  261568 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:56.033725  261568 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:56.033788  261568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:56.033803  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:56.033860  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:56.033970  261568 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:56.034118  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:56.043909  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:56.068426  261568 start.go:296] duration metric: took 156.102069ms for postStartSetup
	I1228 06:56:56.068509  261568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:56.068568  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.094504  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.186274  261568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.192245  261568 fix.go:56] duration metric: took 4.684422638s for fixHost
	I1228 06:56:56.192269  261568 start.go:83] releasing machines lock for "default-k8s-diff-port-500581", held for 4.684465564s
	I1228 06:56:56.192339  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:56.215984  261568 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.216056  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.216085  261568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.216168  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.236830  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.237219  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.636809  260283 cli_runner.go:164] Run: docker network inspect embed-certs-422591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:55.657292  260283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:55.661351  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:55.671982  260283 kubeadm.go:884] updating cluster {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:55.672135  260283 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:55.672197  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.717231  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.717252  260283 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:55.717304  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.750510  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.750537  260283 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:55.750545  260283 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:55.750638  260283 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
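
The [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube renders for the kubelet; the bare `ExecStart=` clears the packaged command line so the second `ExecStart=` can substitute the pinned v1.35.0 binary with cluster-specific flags. It is written a few lines below as the 368-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick way to inspect what systemd actually merged, assuming shell access to the node:

	# print kubelet.service plus every drop-in, in merge order
	systemctl cat kubelet
	# show the effective command line of the unit
	systemctl show kubelet -p ExecStart
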
	I1228 06:56:55.750697  260283 ssh_runner.go:195] Run: crio config
	I1228 06:56:55.798757  260283 cni.go:84] Creating CNI manager for ""
	I1228 06:56:55.798781  260283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:55.798794  260283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:55.798816  260283 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422591 NodeName:embed-certs-422591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:55.798981  260283 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422591"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:55.799071  260283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:55.808067  260283 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:55.808139  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:55.816236  260283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1228 06:56:55.830081  260283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:55.844082  260283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
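
The 2214-byte kubeadm.yaml.new copied here is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document dumped above. A hedged sketch for sanity-checking such a file by hand (the `validate` subcommand exists in recent kubeadm releases; older ones only offer `config print` for comparison):

	# statically validate the rendered multi-document config
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# print upstream defaults to diff against the rendered values
	kubeadm config print init-defaults
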
	I1228 06:56:55.857168  260283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:55.861349  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:55.872967  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.969484  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:55.991172  260283 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591 for IP: 192.168.76.2
	I1228 06:56:55.991194  260283 certs.go:195] generating shared ca certs ...
	I1228 06:56:55.991213  260283 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:55.991369  260283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:55.991423  260283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:55.991435  260283 certs.go:257] generating profile certs ...
	I1228 06:56:55.991549  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/client.key
	I1228 06:56:55.991631  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key.3be22f86
	I1228 06:56:55.991682  260283 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key
	I1228 06:56:55.991823  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:55.991865  260283 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:55.991877  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:55.991914  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:55.991950  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:55.991981  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:55.992051  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.992737  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:56.012567  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:56.034343  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:56.057165  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:56.079350  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1228 06:56:56.103893  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:56.123746  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:56.141940  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:56.160463  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:56.177728  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:56.199019  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:56.220395  260283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:56.235535  260283 ssh_runner.go:195] Run: openssl version
	I1228 06:56:56.242495  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.250951  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:56.260106  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264522  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264582  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.302672  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:56.310442  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.318190  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:56.326937  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330782  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330838  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.366947  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:56.374588  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.382855  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:56.392178  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400669  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400781  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.443361  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
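
The test -s / ln -fs / openssl x509 -hash / test -L sequences above implement the standard OpenSSL CA directory layout: each trusted certificate is exposed under /etc/ssl/certs as a symlink named <subject-hash>.0 (b5213941.0 is the minikubeCA hash in this run) so TLS clients can locate it by hash. The same dance for one certificate, as a sketch:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	test -L "/etc/ssl/certs/${HASH}.0" && echo "trusted as ${HASH}.0"
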
	I1228 06:56:56.451380  260283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:56.455260  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:56.493195  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:56.552322  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:56.610967  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:56.678082  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:56.744904  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
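
Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 24 hours from now; exit status 0 means yes, 1 means it is expired or about to expire, which is what gates minikube's decision to regenerate control-plane certs. A sketch over two of the files checked in this run:

	for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  # exit 0: still valid 86400s from now; exit 1: expiring or expired
	  openssl x509 -noout -in "$c" -checkend 86400 && echo "OK $c" || echo "EXPIRING $c"
	done
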
	I1228 06:56:56.802976  260283 kubeadm.go:401] StartCluster: {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:56.803131  260283 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:56.887317  260283 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:56.902690  260283 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:56Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:56.902780  260283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:56.911889  260283 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:56.911919  260283 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:56.911966  260283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:56.921385  260283 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:56.922175  260283 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422591" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.922628  260283 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422591" cluster setting kubeconfig missing "embed-certs-422591" context setting]
	I1228 06:56:56.923248  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.924994  260283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:56.935152  260283 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 06:56:56.935190  260283 kubeadm.go:602] duration metric: took 23.263516ms to restartPrimaryControlPlane
	I1228 06:56:56.935207  260283 kubeadm.go:403] duration metric: took 132.238201ms to StartCluster
	I1228 06:56:56.935226  260283 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.935306  260283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.936685  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.936960  260283 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:56.937200  260283 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:56.937287  260283 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:56.937304  260283 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	W1228 06:56:56.937311  260283 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:56.937316  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:56.937338  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.937426  260283 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:56.937441  260283 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:56.937706  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937808  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937839  260283 addons.go:70] Setting dashboard=true in profile "embed-certs-422591"
	I1228 06:56:56.937859  260283 addons.go:239] Setting addon dashboard=true in "embed-certs-422591"
	W1228 06:56:56.937868  260283 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:56.937892  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.938390  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.939110  260283 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:56.940441  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:56.975612  260283 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:56.976794  260283 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:56.976818  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:56.976876  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:56.981192  260283 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	W1228 06:56:56.981219  260283 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:56.981245  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.981694  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.982695  260283 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:56.984042  260283 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:56:55.347118  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.367880  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:55.385829  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.405574  260915 provision.go:87] duration metric: took 250.043655ms to configureAuth
	I1228 06:56:55.405599  260915 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.405793  260915 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.405923  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.426557  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.426761  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.426777  260915 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.707096  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.707127  260915 machine.go:97] duration metric: took 1.025985439s to provisionDockerMachine
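
The SSH command issued just above drops a one-line environment file that the kicbase crio.service is wired to source, marking the service CIDR as an insecure registry range; the echoed CRIO_MINIKUBE_OPTIONS line in the command output confirms the write before crio is restarted. A sketch for verifying after the fact (the EnvironmentFile wiring is an assumption about the kicbase unit, not shown in this log):

	cat /etc/sysconfig/crio.minikube           # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -i environment   # confirm the unit actually sources the file
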
	I1228 06:56:55.707141  260915 client.go:176] duration metric: took 5.226772639s to LocalClient.Create
	I1228 06:56:55.707163  260915 start.go:167] duration metric: took 5.226863018s to libmachine.API.Create "newest-cni-479871"
	I1228 06:56:55.707178  260915 start.go:293] postStartSetup for "newest-cni-479871" (driver="docker")
	I1228 06:56:55.707191  260915 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.707328  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.707387  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.730590  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:55.828324  260915 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:55.832265  260915 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:55.832288  260915 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:55.832299  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:55.832350  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:55.832419  260915 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:55.832512  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:55.839863  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.861613  260915 start.go:296] duration metric: took 154.42382ms for postStartSetup
	I1228 06:56:55.861983  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.882165  260915 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:56:55.882431  260915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:55.882487  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.907110  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.002055  260915 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.007480  260915 start.go:128] duration metric: took 5.529512048s to createHost
	I1228 06:56:56.007505  260915 start.go:83] releasing machines lock for "newest-cni-479871", held for 5.529670542s
	I1228 06:56:56.007573  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:56.029672  260915 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.029705  260915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.029725  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.029776  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.055251  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.056879  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.223588  260915 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.231474  260915 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.270713  260915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.275245  260915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.275311  260915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.303121  260915 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
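
The find/-exec mv above disables competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, which keeps the step reversible; kindnet's config then wins by default. The same operation with the shell escaping spelled out, as a sketch:

	# rename (never delete) bridge/podman CNI configs so they stop matching
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
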
	I1228 06:56:56.303143  260915 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.303180  260915 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.303231  260915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.319367  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.332383  260915 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.332437  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.349611  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.366740  260915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.458933  260915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.581970  260915 docker.go:234] disabling docker service ...
	I1228 06:56:56.582057  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.611636  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.629973  260915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.778762  260915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:56.898948  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
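
Order matters in the docker shutdown above: docker.socket is stopped before docker.service because socket activation would otherwise restart the daemon on the next connection, and `mask` then symlinks the unit to /dev/null so nothing can start it at all. A condensed sketch of the same sequence:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service    # unit now points at /dev/null
	systemctl is-active docker || echo "docker is down"
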
	I1228 06:56:56.915292  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:56.936739  260915 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:56.936802  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.957436  260915 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:56.957511  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.970285  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.991323  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.012351  260915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.030720  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.044267  260915 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.063444  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.076260  260915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.086701  260915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.094844  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.197445  260915 ssh_runner.go:195] Run: sudo systemctl restart crio
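
Taken together, the sed pipeline above should leave /etc/crio/crio.conf.d/02-crio.conf with pause_image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to systemd, conmon_cgroup set to pod, and net.ipv4.ip_unprivileged_port_start=0 inside default_sysctls. A verification sketch to run after the restart (expected values inferred from the commands, not dumped by this log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	systemctl is-active crio   # prints "active" once the restart completes
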
	I1228 06:56:57.376208  260915 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.376288  260915 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.381285  260915 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.381333  260915 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.386277  260915 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.416647  260915 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:57.416739  260915 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.451001  260915 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.487677  260915 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:57.488839  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.510156  260915 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.515131  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.529473  260915 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 06:56:56.380475  261568 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.388512  261568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.432498  261568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.437345  261568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.437405  261568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.445717  261568 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:56.445738  261568 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.445770  261568 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.445818  261568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.460887  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.472988  261568 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.473075  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.488438  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.505894  261568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.621379  261568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.764198  261568 docker.go:234] disabling docker service ...
	I1228 06:56:56.764262  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.784627  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.801487  261568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.935018  261568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:57.099832  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:57.114590  261568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:57.138584  261568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:57.138648  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.149353  261568 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:57.149428  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.160151  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.171588  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.182489  261568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.193579  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.206803  261568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.219708  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.230493  261568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.241799  261568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.254056  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.353683  261568 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:57.510586  261568 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.510663  261568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.515591  261568 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.515660  261568 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.520214  261568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.552121  261568 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:57.552210  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.588059  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.633785  261568 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	W1228 06:56:55.687228  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:56.187608  252331 pod_ready.go:94] pod "coredns-7d764666f9-npk6g" is "Ready"
	I1228 06:56:56.187639  252331 pod_ready.go:86] duration metric: took 35.50648982s for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.190301  252331 pod_ready.go:83] waiting for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.194625  252331 pod_ready.go:94] pod "etcd-no-preload-950460" is "Ready"
	I1228 06:56:56.194650  252331 pod_ready.go:86] duration metric: took 4.324521ms for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.196770  252331 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.200996  252331 pod_ready.go:94] pod "kube-apiserver-no-preload-950460" is "Ready"
	I1228 06:56:56.201021  252331 pod_ready.go:86] duration metric: took 4.22637ms for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.203067  252331 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.386984  252331 pod_ready.go:94] pod "kube-controller-manager-no-preload-950460" is "Ready"
	I1228 06:56:56.387016  252331 pod_ready.go:86] duration metric: took 183.928403ms for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.586132  252331 pod_ready.go:83] waiting for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.998562  252331 pod_ready.go:94] pod "kube-proxy-294rn" is "Ready"
	I1228 06:56:56.998589  252331 pod_ready.go:86] duration metric: took 412.431002ms for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.186108  252331 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585825  252331 pod_ready.go:94] pod "kube-scheduler-no-preload-950460" is "Ready"
	I1228 06:56:57.585854  252331 pod_ready.go:86] duration metric: took 399.717455ms for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585870  252331 pod_ready.go:40] duration metric: took 36.908067526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:57.640725  252331 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:57.643532  252331 out.go:179] * Done! kubectl is now configured to use "no-preload-950460" cluster and "default" namespace by default
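
The pod_ready loop that just finished polls each control-plane pod for the Ready condition (35.5s for coredns, milliseconds for the rest, 36.9s in total). Roughly the same check can be reproduced against the finished cluster with kubectl, as a sketch using this run's profile name:

	# wait for kube-proxy pods to report Ready, then eyeball the rest
	kubectl --context no-preload-950460 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s
	kubectl --context no-preload-950460 -n kube-system get pods
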
	I1228 06:56:56.986006  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:56.986182  260283 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:56.986292  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.021254  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.026470  260283 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.026498  260283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:57.026561  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.032342  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.052347  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.114798  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.136438  260283 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:57.143093  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:57.146367  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:57.146437  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:57.159893  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.162997  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:57.163021  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:57.178802  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:57.178824  260283 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:57.195442  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:57.195462  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:57.215683  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:57.215712  260283 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:57.234390  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:57.234464  260283 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:57.250624  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:57.250659  260283 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:57.269371  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:57.269405  260283 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:57.287286  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:57.287318  260283 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:57.303823  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:58.341277  260283 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:58.341486  260283 node_ready.go:38] duration metric: took 1.204996046s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:58.341543  260283 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:58.341625  260283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:59.079200  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.9360724s)
	I1228 06:56:59.079284  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.919364076s)
	I1228 06:56:59.079836  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.77597476s)
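
All three addon applies above run the node's pinned kubectl (/var/lib/minikube/binaries/v1.35.0/kubectl) with KUBECONFIG pointed at the in-VM admin config, batching the ten dashboard manifests into a single apply; the Completed lines show each batch taking just under two seconds. A sketch for checking the result from the host afterwards (kubernetes-dashboard is the addon's usual namespace, an assumption here):

	kubectl --context embed-certs-422591 -n kubernetes-dashboard get sa,cm,deploy,svc
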
	I1228 06:56:59.079928  260283 api_server.go:72] duration metric: took 2.142935627s to wait for apiserver process to appear ...
	I1228 06:56:59.080185  260283 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:59.080283  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.081622  260283 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422591 addons enable metrics-server
	
	I1228 06:56:59.086704  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.086730  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
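
Both 500 dumps above are the verbose form of GET /healthz: every [+] line is a post-start hook that has completed, and the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) simply have not finished this soon after the restart, so the endpoint returns 500 until they flip. The endpoint is normally reachable anonymously via the system:public-info-viewer role, so a sketch of the same probe is just:

	# -k because the cluster CA isn't in the host trust store; 500 until every hook passes
	curl -sk "https://192.168.76.2:8443/healthz?verbose" | tail -n 5
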
	I1228 06:56:59.096150  260283 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:56:57.634878  261568 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-500581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.656475  261568 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.662868  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.680225  261568 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.680387  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.680441  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.725731  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.725752  261568 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.725791  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.758843  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.758867  261568 cache_images.go:86] Images are preloaded, skipping loading
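
Lines like "all images are preloaded" come from comparing the output of `sudo crictl images --output json` against the image list expected for this Kubernetes version. A sketch of parsing that JSON, assuming the usual crictl output shape with a top-level "images" array and a per-image "repoTags" field (the expected-list check is illustrative, not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictl images --output json emits {"images":[{"repoTags":[...], ...}]}
// (field names assumed from crictl's JSON output).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether every expected tag is already present in CRI-O.
func preloaded(expected []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"registry.k8s.io/kube-apiserver:v1.35.0"})
	fmt.Println(ok, err)
}
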
	I1228 06:56:57.758878  261568 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1228 06:56:57.759067  261568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-500581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
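
The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders for this node: the empty ExecStart= line first resets any inherited definition (standard systemd convention), and the second ExecStart= redefines it with node-specific flags (hostname override, node IP, kubeconfig paths). The log later scp's it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of rendering such a drop-in with text/template; the template text is modeled on (and abridged from) the log output, not copied from minikube's source:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the default-k8s-diff-port-500581 node in the log.
	t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0/kubelet",
		"NodeName":    "default-k8s-diff-port-500581",
		"NodeIP":      "192.168.103.2",
	})
}
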
	I1228 06:56:57.759165  261568 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.825229  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.825249  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.825263  261568 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:57.825283  261568 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-500581 NodeName:default-k8s-diff-port-500581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.825427  261568 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-500581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
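
The "scp memory --> ..." entries mean the file content is rendered in-process and streamed straight to the remote path over SSH rather than read from local disk; the byte counts (379, 352, 2227) are the rendered sizes. A minimal sketch of writing an in-memory buffer to a root-owned remote file over an established golang.org/x/crypto/ssh session; the tee-based transfer is an assumption about the mechanism, not minikube's exact implementation, and the credentials are placeholders:

package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams data to a root-owned path on the remote host by piping
// it into `sudo tee`. client is an already-connected *ssh.Client.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee writes stdin to the target file; its stdout echo is discarded.
	return session.Run(fmt.Sprintf("sudo tee %q >/dev/null", path))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // test-rig setting only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33088", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	kubeadmYAML := []byte("# rendered kubeadm config goes here\n")
	if err := writeRemote(client, "/var/tmp/minikube/kubeadm.yaml.new", kubeadmYAML); err != nil {
		log.Fatal(err)
	}
}
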
	I1228 06:56:57.825488  261568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.834015  261568 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.834104  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.842957  261568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1228 06:56:57.861130  261568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.875931  261568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1228 06:56:57.890937  261568 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.894724  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.904606  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:58.027677  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:58.050675  261568 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581 for IP: 192.168.103.2
	I1228 06:56:58.050696  261568 certs.go:195] generating shared ca certs ...
	I1228 06:56:58.050715  261568 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.050893  261568 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:58.050947  261568 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:58.050958  261568 certs.go:257] generating profile certs ...
	I1228 06:56:58.051080  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/client.key
	I1228 06:56:58.051160  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key.4e0fc9ea
	I1228 06:56:58.051212  261568 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key
	I1228 06:56:58.051319  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.051361  261568 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.051375  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.051416  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.051453  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.051491  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.051540  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.052173  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.074301  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.094763  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.114646  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.151474  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 06:56:58.178111  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.196129  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.225303  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:58.252987  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.275157  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.292772  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.324117  261568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.344196  261568 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.359329  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.373180  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.388547  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397646  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397716  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.463000  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.472957  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.482337  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.493234  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497494  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497554  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.554499  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.563535  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.571433  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.580593  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586440  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586531  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.645335  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
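
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA into the OpenSSL trust directory: OpenSSL looks up CAs by a filename of the form <subject-hash>.0, so each PEM gets a symlink named after its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A sketch of the same two steps driven from Go; the helper name is illustrative and it needs root to write into /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the `openssl x509 -hash` + `ln -fs` pair
// from the log.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
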
	I1228 06:56:58.658570  261568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.664780  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:58.731559  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:58.794292  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:58.854366  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:58.912352  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:58.971537  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
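
Each `-checkend 86400` invocation above asks whether the certificate expires within the next 86400 seconds (24 hours); a zero exit means it is still valid for at least a day, so no regeneration is needed. The same check in Go with crypto/x509 (a stand-alone sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the crypto/x509 equivalent of
// `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
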
	I1228 06:56:59.020042  261568 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:59.020173  261568 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:59.077797  261568 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:59.092934  261568 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:59Z" level=error msg="open /run/runc: no such file or directory"
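
The "unpause failed" warning above is benign on this configuration: minikube asks runc for its container list to see whether anything is paused, but CRI-O on this image keeps no state under /run/runc, so the listing itself fails and the check is skipped. A sketch of the intended check, running `runc list -f json` and collecting paused container IDs; the state-struct fields follow runc's documented JSON output, so treat the shape as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runc list -f json prints an array of container states; only the fields we
// need are declared here.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers under the given runc root.
// On this CI image the root /run/runc does not exist, so the command itself
// fails with exactly the error quoted in the log above.
func listPaused(root string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused("/run/runc")
	fmt.Println(ids, err)
}
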
	I1228 06:56:59.093006  261568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:59.104271  261568 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:59.104290  261568 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:59.104344  261568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:59.114137  261568 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:59.115134  261568 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-500581" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.115666  261568 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-500581" cluster setting kubeconfig missing "default-k8s-diff-port-500581" context setting]
	I1228 06:56:59.116519  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.118500  261568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:59.129715  261568 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1228 06:56:59.129755  261568 kubeadm.go:602] duration metric: took 25.457297ms to restartPrimaryControlPlane
	I1228 06:56:59.129767  261568 kubeadm.go:403] duration metric: took 109.746452ms to StartCluster
	I1228 06:56:59.129787  261568 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.129865  261568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.131990  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.132237  261568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:59.132306  261568 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:59.132422  261568 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132442  261568 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132440  261568 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132458  261568 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132466  261568 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132472  261568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:59.132501  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1228 06:56:59.132476  261568 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:59.132606  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	W1228 06:56:59.132451  261568 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:59.132643  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.132804  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133076  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133196  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.134412  261568 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:59.135423  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:59.160990  261568 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	W1228 06:56:59.161019  261568 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:59.161062  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.161632  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.164387  261568 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:59.164457  261568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:59.165776  261568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.165796  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:59.165854  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.166051  261568 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
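
The enable-addons step starts from the big toEnable map logged earlier (only dashboard, default-storageclass and storage-provisioner are true for this profile) and runs one Setting/installing sequence per enabled addon. A small sketch of filtering such a map; the names mirror the log and the helper is illustrative:

package main

import (
	"fmt"
	"sort"
)

// enabledAddons filters the toEnable map down to the addons that are
// actually switched on for this profile.
func enabledAddons(toEnable map[string]bool) []string {
	var on []string
	for name, enabled := range toEnable {
		if enabled {
			on = append(on, name)
		}
	}
	sort.Strings(on) // map iteration order is random; sort for stable output
	return on
}

func main() {
	toEnable := map[string]bool{
		"dashboard": true, "default-storageclass": true,
		"storage-provisioner": true, "volcano": false, "metrics-server": false,
	}
	fmt.Println(enabledAddons(toEnable))
	// Output: [dashboard default-storageclass storage-provisioner]
}
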
	I1228 06:56:57.530689  260915 kubeadm.go:884] updating cluster {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.530879  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.530955  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.573400  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.573424  260915 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.573472  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.605727  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.605749  260915 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:57.605756  260915 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:57.605895  260915 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-479871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:57.605982  260915 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.674056  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.674080  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.674097  260915 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 06:56:57.674130  260915 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-479871 NodeName:newest-cni-479871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.674294  260915 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-479871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:57.674363  260915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.683718  260915 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.683774  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.697208  260915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:57.714193  260915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.736019  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1228 06:56:57.752347  260915 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.757444  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.770946  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.879994  260915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.907780  260915 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871 for IP: 192.168.85.2
	I1228 06:56:57.907815  260915 certs.go:195] generating shared ca certs ...
	I1228 06:56:57.907835  260915 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.907990  260915 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:57.908075  260915 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:57.908095  260915 certs.go:257] generating profile certs ...
	I1228 06:56:57.908171  260915 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key
	I1228 06:56:57.908190  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt with IP's: []
	I1228 06:56:57.970315  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt ...
	I1228 06:56:57.970351  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt: {Name:mk342ba4e76ceae6509b3a9b3e06bce76a0143fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970558  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key ...
	I1228 06:56:57.970573  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key: {Name:mk6097687692feb30b71900aa35b4aee9faa2acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970713  260915 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581
	I1228 06:56:57.970751  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 06:56:58.015745  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 ...
	I1228 06:56:58.015774  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581: {Name:mk60335156a565fa5df02e2632a77039efa4fc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.015954  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 ...
	I1228 06:56:58.015970  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581: {Name:mk63edb29b1d00cff7e6d926b73407d8754bf39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.016080  260915 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt
	I1228 06:56:58.016188  260915 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key
	I1228 06:56:58.016281  260915 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key
	I1228 06:56:58.016305  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt with IP's: []
	I1228 06:56:58.169217  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt ...
	I1228 06:56:58.169306  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt: {Name:mk5ba8b17c1f71db6636f0d33f2f72040423ed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.169505  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key ...
	I1228 06:56:58.169521  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key: {Name:mk4b0b0f3f2c0acfd0e4e41f4c53c10301c4aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
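
For a first start (newest-cni-479871 here) the profile certs do not exist yet, so minikube generates them: the apiserver cert above is signed with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], that is, the service-network VIP, loopback, and the node IP. A self-contained crypto/x509 sketch of issuing a cert with those IP SANs from a freshly made CA (all names are illustrative; the log reuses minikubeCA from disk rather than creating one):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key/cert (stand-in for minikubeCA). Errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver cert with the IP SANs quoted in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
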
	I1228 06:56:58.169760  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.169804  260915 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.169816  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.169857  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.169919  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.169960  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.170023  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.170853  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.189272  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.211984  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.244360  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.268746  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:58.287410  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.315271  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.346205  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:58.384409  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.419149  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.454023  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.476345  260915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.493349  260915 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.500769  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.510854  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.521404  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526814  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526893  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.579536  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.591726  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.603715  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.613518  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.622954  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627431  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627487  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.687477  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:58.699073  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:56:58.710948  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.722754  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.735944  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741915  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741988  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.800642  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.811409  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
	I1228 06:56:58.823986  260915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.829294  260915 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:56:58.829413  260915 kubeadm.go:401] StartCluster: {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:58.829571  260915 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:58.913584  260915 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:58.932081  260915 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:58Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:58.932154  260915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:58.942180  260915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:56:58.953694  260915 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:56:58.953794  260915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:56:58.962855  260915 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:56:58.962880  260915 kubeadm.go:158] found existing configuration files:
	
	I1228 06:56:58.962926  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:56:58.972496  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:56:58.972534  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:56:58.980676  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:56:58.991072  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:56:58.991204  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:56:58.999651  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.008281  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:56:59.008349  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.016399  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:56:59.024902  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:56:59.024962  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
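
This cleanup pass greps each kubeconfig-style file under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it; on this fresh node all four files are absent, so every rm is a no-op. A Go rendition of the per-file check (hypothetical helper; the endpoint string comes from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConf removes path unless it mentions the expected endpoint,
// mirroring the grep-then-rm sequence in the log. A missing file counts
// as already clean.
func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // endpoint matches; keep the file
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneStaleConf("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
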
	I1228 06:56:59.032507  260915 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:56:59.203193  260915 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:56:59.293476  260915 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
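
kubeadm init then runs with the rendered config and a long --ignore-preflight-errors list; the two [WARNING] lines are expected inside a container (there is no kernel config to verify, and the kubelet service is started by minikube itself rather than enabled in systemd). Invoking it from Go looks roughly like this; the PATH prefix and flags follow the Start: line above, with the ignore list abridged, and everything else is a sketch:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Put the versioned binaries first on PATH, then run kubeadm init with
	// the rendered config, the same shape as the Start: line in the log.
	// The --ignore-preflight-errors list is abridged here; the full list is
	// in the log line above.
	cmdline := `env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`
	cmd := exec.Command("sudo", "/bin/bash", "-c", cmdline)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr // preflight [WARNING] lines arrive here
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
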
	I1228 06:56:59.167161  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:59.167190  261568 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:59.167250  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.195102  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.209215  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.213140  261568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.213164  261568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:59.213251  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.240235  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.296939  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:59.314285  261568 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:59.324792  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.342466  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:59.342613  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:59.359906  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.364010  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:59.364045  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:59.391472  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:59.391508  261568 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:59.444439  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:59.444465  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:59.472399  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:59.472451  261568 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:59.491671  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:59.491775  261568 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:59.516085  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:59.516120  261568 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:59.540413  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:59.540444  261568 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:59.563645  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:59.563672  261568 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:59.581659  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:57:00.542003  261568 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:00.542057  261568 node_ready.go:38] duration metric: took 1.227733507s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:57:00.542077  261568 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:00.542135  261568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
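
"waiting up to 6m0s for node ... to be Ready" polls the node object until its Ready condition reports True. With client-go, the per-poll check looks roughly like this; the kubeconfig path and node name come from the log, and the retry loop around the check is left out:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := nodeReady(clientset, "default-k8s-diff-port-500581")
	fmt.Println(ready, err)
}
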
	I1228 06:57:01.105548  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.780701527s)
	I1228 06:57:01.105609  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.745665986s)
	I1228 06:57:01.105694  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.523998963s)
	I1228 06:57:01.105746  261568 api_server.go:72] duration metric: took 1.973482037s to wait for apiserver process to appear ...
	I1228 06:57:01.105763  261568 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:01.105885  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.107453  261568 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-500581 addons enable metrics-server
	
	I1228 06:57:01.110897  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.110919  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:01.112410  261568 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:57:01.113682  261568 addons.go:530] duration metric: took 1.981384906s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.097263  260283 addons.go:530] duration metric: took 2.160064919s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.581199  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.589461  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.589517  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:00.081170  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:57:00.085345  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:57:00.086368  260283 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:00.086398  260283 api_server.go:131] duration metric: took 1.006128416s to wait for apiserver health ...
	I1228 06:57:00.086409  260283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:00.090076  260283 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:00.090113  260283 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.090124  260283 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.090138  260283 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.090151  260283 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.090162  260283 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.090186  260283 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.090199  260283 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.090212  260283 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.090223  260283 system_pods.go:74] duration metric: took 3.804246ms to wait for pod list to return data ...
	I1228 06:57:00.090236  260283 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:00.092690  260283 default_sa.go:45] found service account: "default"
	I1228 06:57:00.092707  260283 default_sa.go:55] duration metric: took 2.461167ms for default service account to be created ...
	I1228 06:57:00.092720  260283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:00.095179  260283 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:00.095212  260283 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.095224  260283 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.095245  260283 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.095258  260283 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.095267  260283 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.095278  260283 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.095286  260283 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.095297  260283 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.095307  260283 system_pods.go:126] duration metric: took 2.57702ms to wait for k8s-apps to be running ...
	I1228 06:57:00.095319  260283 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:00.095369  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:00.112536  260283 system_svc.go:56] duration metric: took 17.190631ms WaitForService to wait for kubelet
	I1228 06:57:00.112574  260283 kubeadm.go:587] duration metric: took 3.175583293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:00.112597  260283 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:00.117248  260283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:00.117423  260283 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:00.117486  260283 node_conditions.go:105] duration metric: took 4.86014ms to run NodePressure ...
	I1228 06:57:00.117528  260283 start.go:242] waiting for startup goroutines ...
	I1228 06:57:00.117683  260283 start.go:247] waiting for cluster config update ...
	I1228 06:57:00.117705  260283 start.go:256] writing updated cluster config ...
	I1228 06:57:00.118280  260283 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:00.124948  260283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:00.129371  260283 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:02.139240  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:01.606775  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.611458  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.611490  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:02.106018  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:02.112713  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:57:02.114062  261568 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:02.114087  261568 api_server.go:131] duration metric: took 1.008258851s to wait for apiserver health ...
	I1228 06:57:02.114096  261568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:02.118560  261568 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:02.118604  261568 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.118620  261568 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.118631  261568 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.118640  261568 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.118651  261568 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.118660  261568 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.118668  261568 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.118676  261568 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.118685  261568 system_pods.go:74] duration metric: took 4.581477ms to wait for pod list to return data ...
	I1228 06:57:02.118694  261568 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:02.122002  261568 default_sa.go:45] found service account: "default"
	I1228 06:57:02.122020  261568 default_sa.go:55] duration metric: took 3.320928ms for default service account to be created ...
	I1228 06:57:02.122039  261568 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:02.125517  261568 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:02.125558  261568 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.125571  261568 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.125594  261568 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.125607  261568 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.125619  261568 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.125628  261568 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.125643  261568 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.125650  261568 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.125663  261568 system_pods.go:126] duration metric: took 3.61618ms to wait for k8s-apps to be running ...
	I1228 06:57:02.125675  261568 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:02.125723  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:02.146516  261568 system_svc.go:56] duration metric: took 20.829772ms WaitForService to wait for kubelet
	I1228 06:57:02.146548  261568 kubeadm.go:587] duration metric: took 3.014284503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:02.146571  261568 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:02.151142  261568 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:02.151173  261568 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:02.151191  261568 node_conditions.go:105] duration metric: took 4.614814ms to run NodePressure ...
	I1228 06:57:02.151206  261568 start.go:242] waiting for startup goroutines ...
	I1228 06:57:02.151215  261568 start.go:247] waiting for cluster config update ...
	I1228 06:57:02.151228  261568 start.go:256] writing updated cluster config ...
	I1228 06:57:02.151492  261568 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:02.158502  261568 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:02.163107  261568 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:04.168739  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:06.170937  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:07.273248  260915 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:57:07.273330  260915 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:57:07.273447  260915 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:57:07.273543  260915 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:57:07.273595  260915 kubeadm.go:319] OS: Linux
	I1228 06:57:07.273651  260915 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:57:07.273709  260915 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:57:07.273771  260915 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:57:07.273835  260915 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:57:07.273916  260915 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:57:07.273992  260915 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:57:07.274078  260915 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:57:07.274138  260915 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:57:07.274235  260915 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:57:07.274357  260915 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:57:07.274477  260915 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:57:07.274563  260915 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 06:57:07.276237  260915 out.go:252]   - Generating certificates and keys ...
	I1228 06:57:07.276338  260915 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:57:07.276435  260915 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:57:07.276531  260915 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:57:07.276613  260915 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:57:07.276715  260915 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:57:07.276790  260915 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:57:07.276871  260915 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:57:07.277062  260915 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277160  260915 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:57:07.277338  260915 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277431  260915 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:57:07.277519  260915 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:57:07.277582  260915 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:57:07.277660  260915 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:57:07.277726  260915 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:57:07.277802  260915 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:57:07.277871  260915 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:57:07.277975  260915 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:57:07.278078  260915 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:57:07.278183  260915 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:57:07.278271  260915 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:57:07.279768  260915 out.go:252]   - Booting up control plane ...
	I1228 06:57:07.279971  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:57:07.280118  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:57:07.280203  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:57:07.280341  260915 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:57:07.280459  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:57:07.280594  260915 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:57:07.280705  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:57:07.280752  260915 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:57:07.280918  260915 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:57:07.281066  260915 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:57:07.281146  260915 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.62379ms
	I1228 06:57:07.281264  260915 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:57:07.281347  260915 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1228 06:57:07.281414  260915 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:57:07.281473  260915 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:57:07.281553  260915 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006106358s
	I1228 06:57:07.281644  260915 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.100872978s
	I1228 06:57:07.281739  260915 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001834302s
	I1228 06:57:07.281997  260915 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:57:07.282187  260915 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:57:07.282270  260915 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:57:07.282522  260915 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-479871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:57:07.282694  260915 kubeadm.go:319] [bootstrap-token] Using token: 1h1kon.f0uwfkf8goxau87f
	I1228 06:57:07.285641  260915 out.go:252]   - Configuring RBAC rules ...
	I1228 06:57:07.285801  260915 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:57:07.285940  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:57:07.286155  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:57:07.286341  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:57:07.286509  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:57:07.286626  260915 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:57:07.286789  260915 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:57:07.286944  260915 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:57:07.287022  260915 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:57:07.287050  260915 kubeadm.go:319] 
	I1228 06:57:07.287134  260915 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:57:07.287148  260915 kubeadm.go:319] 
	I1228 06:57:07.287240  260915 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:57:07.287251  260915 kubeadm.go:319] 
	I1228 06:57:07.287284  260915 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:57:07.287366  260915 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:57:07.287440  260915 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:57:07.287451  260915 kubeadm.go:319] 
	I1228 06:57:07.287527  260915 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:57:07.287537  260915 kubeadm.go:319] 
	I1228 06:57:07.287606  260915 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:57:07.287615  260915 kubeadm.go:319] 
	I1228 06:57:07.287692  260915 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:57:07.287797  260915 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:57:07.287900  260915 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:57:07.287911  260915 kubeadm.go:319] 
	I1228 06:57:07.288018  260915 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:57:07.288149  260915 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:57:07.288163  260915 kubeadm.go:319] 
	I1228 06:57:07.288271  260915 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288398  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:57:07.288433  260915 kubeadm.go:319] 	--control-plane 
	I1228 06:57:07.288450  260915 kubeadm.go:319] 
	I1228 06:57:07.288562  260915 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:57:07.288578  260915 kubeadm.go:319] 
	I1228 06:57:07.288682  260915 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288837  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
	I1228 06:57:07.288863  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:57:07.288884  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:07.290538  260915 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1228 06:57:04.636200  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:06.636940  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:07.291873  260915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:57:07.298126  260915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:57:07.298146  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:57:07.319436  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:57:07.645417  260915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:57:07.645491  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-479871 minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=newest-cni-479871 minikube.k8s.io/primary=true
	I1228 06:57:07.645603  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:07.785117  260915 ops.go:34] apiserver oom_adj: -16
	I1228 06:57:07.785122  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.285590  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.785995  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.285435  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.785188  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1228 06:57:08.671402  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:10.673458  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:10.285783  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:10.785938  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.285397  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.785451  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.861999  260915 kubeadm.go:1114] duration metric: took 4.216629312s to wait for elevateKubeSystemPrivileges
	I1228 06:57:11.862088  260915 kubeadm.go:403] duration metric: took 13.032677581s to StartCluster
	I1228 06:57:11.862111  260915 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:11.862308  260915 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:11.864955  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:11.865249  260915 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:11.865367  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:57:11.865643  260915 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:57:11.865724  260915 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:11.865736  260915 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-479871"
	I1228 06:57:11.865753  260915 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-479871"
	I1228 06:57:11.865782  260915 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:11.865784  260915 addons.go:70] Setting default-storageclass=true in profile "newest-cni-479871"
	I1228 06:57:11.865799  260915 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-479871"
	I1228 06:57:11.866154  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.866403  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.867390  260915 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:11.868587  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:11.893738  260915 addons.go:239] Setting addon default-storageclass=true in "newest-cni-479871"
	I1228 06:57:11.893776  260915 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:11.894248  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.894552  260915 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:57:11.896093  260915 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:11.896115  260915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:57:11.896177  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:11.927222  260915 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:11.927248  260915 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:57:11.927322  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:11.928405  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:11.965132  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:11.990719  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:57:12.039886  260915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:12.049743  260915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:12.086577  260915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:12.189094  260915 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1228 06:57:12.190197  260915 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:12.190260  260915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:12.404986  260915 api_server.go:72] duration metric: took 539.699676ms to wait for apiserver process to appear ...
	I1228 06:57:12.405015  260915 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:12.405067  260915 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:12.410709  260915 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1228 06:57:12.411987  260915 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:12.412017  260915 api_server.go:131] duration metric: took 6.99389ms to wait for apiserver health ...
	I1228 06:57:12.412084  260915 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:12.412431  260915 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:57:12.413844  260915 addons.go:530] duration metric: took 548.200751ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:57:12.415402  260915 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:12.415430  260915 system_pods.go:61] "coredns-7d764666f9-cqtm4" [80bee88e-62a5-413c-9e2b-0cc274cf605d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:12.415437  260915 system_pods.go:61] "etcd-newest-cni-479871" [8bb011cd-dd9f-4176-b43a-5629132fbf66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:12.415446  260915 system_pods.go:61] "kindnet-74fnf" [f610ca19-f52f-41ef-90d7-6ae6b47445da] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:12.415462  260915 system_pods.go:61] "kube-apiserver-newest-cni-479871" [a83949b2-d4ff-40cb-b0de-d4ba8547a489] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:12.415469  260915 system_pods.go:61] "kube-controller-manager-newest-cni-479871" [018c9a7d-7992-49db-afd0-8acc014b1976] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:12.415477  260915 system_pods.go:61] "kube-proxy-kzkbr" [a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:12.415484  260915 system_pods.go:61] "kube-scheduler-newest-cni-479871" [85dcc815-30f1-4c70-a83a-08ca392957f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:12.415490  260915 system_pods.go:61] "storage-provisioner" [267e9641-510e-4fac-a7f3-97501d5ada65] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:12.415498  260915 system_pods.go:74] duration metric: took 3.401244ms to wait for pod list to return data ...
	I1228 06:57:12.415506  260915 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:12.417774  260915 default_sa.go:45] found service account: "default"
	I1228 06:57:12.417795  260915 default_sa.go:55] duration metric: took 2.281764ms for default service account to be created ...
	I1228 06:57:12.417808  260915 kubeadm.go:587] duration metric: took 552.527471ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:12.417828  260915 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:12.420434  260915 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:12.420458  260915 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:12.420471  260915 node_conditions.go:105] duration metric: took 2.637801ms to run NodePressure ...
	I1228 06:57:12.420484  260915 start.go:242] waiting for startup goroutines ...
	I1228 06:57:12.694659  260915 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-479871" context rescaled to 1 replicas
	I1228 06:57:12.694709  260915 start.go:247] waiting for cluster config update ...
	I1228 06:57:12.694726  260915 start.go:256] writing updated cluster config ...
	I1228 06:57:12.695085  260915 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:12.764992  260915 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:12.767272  260915 out.go:179] * Done! kubectl is now configured to use "newest-cni-479871" cluster and "default" namespace by default
	W1228 06:57:09.140155  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:11.635975  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.028693176Z" level=info msg="Started container" PID=1789 containerID=d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper id=ce12700f-a916-4cd1-8ecd-5c41ffae1b1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=88359d28d1d76e1447f1a55227926eb5d3a01e03672de29d3e32104f0c3d03f7
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.064362639Z" level=info msg="Removing container: d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0" id=e8c804ee-4550-451d-9aaf-64e35618b0de name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:37 no-preload-950460 crio[571]: time="2025-12-28T06:56:37.075019495Z" level=info msg="Removed container d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=e8c804ee-4550-451d-9aaf-64e35618b0de name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.099821369Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1add2b67-3a0c-4798-85ee-43855598d1a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.100871601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d3152332-9e75-4036-909d-6f7d6d30c578 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.102050678Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fff877e8-ed85-478d-bffc-503b13e7d38b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.102249291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108243226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108463389Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/19618db850b10b45bb7445aad15c3c9e9de73c483dd3521696d2a542f52b0801/merged/etc/passwd: no such file or directory"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108488516Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/19618db850b10b45bb7445aad15c3c9e9de73c483dd3521696d2a542f52b0801/merged/etc/group: no such file or directory"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.108862616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.142127993Z" level=info msg="Created container 036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590: kube-system/storage-provisioner/storage-provisioner" id=fff877e8-ed85-478d-bffc-503b13e7d38b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.143150333Z" level=info msg="Starting container: 036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590" id=76c8e9cc-6172-4aa5-8d68-7d1e8a84ba59 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:56:51 no-preload-950460 crio[571]: time="2025-12-28T06:56:51.145290183Z" level=info msg="Started container" PID=1803 containerID=036e9a1dc89d553d170e9953b427bf1650640d11fb1a6f6d38ff5194f571b590 description=kube-system/storage-provisioner/storage-provisioner id=76c8e9cc-6172-4aa5-8d68-7d1e8a84ba59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0cc48cb5b0da8b2d24902541cb7597775b8c3fa8a537e72cd8fa2f551d09e42
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.987764444Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16833812-0b71-486c-b5de-74ef9427bde2 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.988770913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=83d5199d-e724-4ce6-8083-665d98689124 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.989912278Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=59dcbc32-324e-48b2-8472-69e97bb9de03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.990087101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.996819157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:01 no-preload-950460 crio[571]: time="2025-12-28T06:57:01.997372891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.031938473Z" level=info msg="Created container 8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=59dcbc32-324e-48b2-8472-69e97bb9de03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.032673084Z" level=info msg="Starting container: 8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5" id=4cad8df5-0fa3-4f92-8a58-7f8ca93c16ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.035007261Z" level=info msg="Started container" PID=1842 containerID=8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper id=4cad8df5-0fa3-4f92-8a58-7f8ca93c16ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=88359d28d1d76e1447f1a55227926eb5d3a01e03672de29d3e32104f0c3d03f7
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.141097215Z" level=info msg="Removing container: d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949" id=cecd6680-be7b-43aa-ab82-609ad7fb3f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:02 no-preload-950460 crio[571]: time="2025-12-28T06:57:02.153441584Z" level=info msg="Removed container d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv/dashboard-metrics-scraper" id=cecd6680-be7b-43aa-ab82-609ad7fb3f7b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8ad3c88b5e19f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   88359d28d1d76       dashboard-metrics-scraper-867fb5f87b-jczrv   kubernetes-dashboard
	036e9a1dc89d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         2                   e0cc48cb5b0da       storage-provisioner                          kube-system
	d73d54615acbd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   1feb5485f9d16       kubernetes-dashboard-b84665fb8-52cwp         kubernetes-dashboard
	cd155e1fe0251       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     1                   3f136a590397c       coredns-7d764666f9-npk6g                     kube-system
	e43f1bb42e622       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   0c1b1a82aac8a       busybox                                      default
	fd03d8dbcc76e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         1                   e0cc48cb5b0da       storage-provisioner                          kube-system
	2eb68979e3082       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 1                   2c18ae3ba3e96       kindnet-xhb7x                                kube-system
	ab6dccc27bbdf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  1                   e94a130d37e4c       kube-proxy-294rn                             kube-system
	bb07d52b1828d       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     1                   90785ca266249       kube-controller-manager-no-preload-950460    kube-system
	284f317ab1ebb       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              1                   d3d8d62c1e4f9       kube-scheduler-no-preload-950460             kube-system
	335d2285b48ba       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        1                   2d1772c838bf1       etcd-no-preload-950460                       kube-system
	a6b593db539dd       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              1                   fdd06bdaac050       kube-apiserver-no-preload-950460             kube-system
	
	
	==> describe nodes <==
	Name:               no-preload-950460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-950460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=no-preload-950460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_55_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:55:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-950460
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:55:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:56:50 +0000   Sun, 28 Dec 2025 06:56:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-950460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                89ca7428-7fe3-48bf-8e6c-c80da5b6d3a1
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-npk6g                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-950460                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-xhb7x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-950460              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-950460     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-294rn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-950460              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-jczrv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-52cwp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-950460 event: Registered Node no-preload-950460 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-950460 event: Registered Node no-preload-950460 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:15 up 39 min,  0 user,  load average: 5.11, 3.16, 1.92
	Linux no-preload-950460 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:56:34 no-preload-950460 kubelet[726]: E1228 06:56:34.053787     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-950460" containerName="kube-apiserver"
	Dec 28 06:56:35 no-preload-950460 kubelet[726]: E1228 06:56:35.815700     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-950460" containerName="kube-controller-manager"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: E1228 06:56:36.383403     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: E1228 06:56:36.987280     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:36 no-preload-950460 kubelet[726]: I1228 06:56:36.987321     726 scope.go:122] "RemoveContainer" containerID="d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: I1228 06:56:37.063107     726 scope.go:122] "RemoveContainer" containerID="d7481796a1e2bb45817a2a841b553bd40cd21b1ca6660d824928d49285266ab0"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063235     726 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-950460" containerName="etcd"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063386     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: I1228 06:56:37.063411     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:56:37 no-preload-950460 kubelet[726]: E1228 06:56:37.063562     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: E1228 06:56:40.514234     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: I1228 06:56:40.514271     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:56:40 no-preload-950460 kubelet[726]: E1228 06:56:40.514417     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:56:51 no-preload-950460 kubelet[726]: I1228 06:56:51.099372     726 scope.go:122] "RemoveContainer" containerID="fd03d8dbcc76e4097ae1b7d2537ef7ada5f92d3166384ba71570161b37557929"
	Dec 28 06:56:55 no-preload-950460 kubelet[726]: E1228 06:56:55.678874     726 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-npk6g" containerName="coredns"
	Dec 28 06:57:01 no-preload-950460 kubelet[726]: E1228 06:57:01.987214     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:01 no-preload-950460 kubelet[726]: I1228 06:57:01.987252     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: I1228 06:57:02.139116     726 scope.go:122] "RemoveContainer" containerID="d8f9dfcacb0f2e60fa831c073d13a7dbadd88f838736cbc32ec2b4a54d30e949"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: E1228 06:57:02.139482     726 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: I1228 06:57:02.139510     726 scope.go:122] "RemoveContainer" containerID="8ad3c88b5e19f6fe3d04e764d3c8bba33be52d561ef13d761b7665d7aa2eb1e5"
	Dec 28 06:57:02 no-preload-950460 kubelet[726]: E1228 06:57:02.139690     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-jczrv_kubernetes-dashboard(a3a8f763-f065-44a8-8a5d-07cfd1073277)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-jczrv" podUID="a3a8f763-f065-44a8-8a5d-07cfd1073277"
	Dec 28 06:57:09 no-preload-950460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:10 no-preload-950460 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:10 no-preload-950460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:10 no-preload-950460 systemd[1]: kubelet.service: Consumed 1.696s CPU time.
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:57:14.374952  269480 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.441985  269480 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.507380  269480 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.576134  269480 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.649670  269480 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.726360  269480 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.806588  269480 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.880257  269480 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.951120  269480 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
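Note: each of the nine "Failed to list containers" errors above is the same probe failing: minikube shells into the node and runs `sudo runc --root /run/runc list -f json`, but /run/runc does not exist on this CRI-O node. A hand-run reproduction, as a sketch only (these commands are illustrative and not part of the test suite; `crictl` and `crio` are assumed to be present in the kicbase image):

	# Reproduce the failing probe exactly as the log collector runs it:
	out/minikube-linux-amd64 -p no-preload-950460 ssh -- sudo runc --root /run/runc list -f json
	# -> level=error msg="open /run/runc: no such file or directory"

	# CRI-O itself still tracks the containers; list them through the CRI instead:
	out/minikube-linux-amd64 -p no-preload-950460 ssh -- sudo crictl ps -a

	# Inspect which runtime root CRI-O is actually configured with
	# (the key may be absent when CRI-O uses its built-in default):
	out/minikube-linux-amd64 -p no-preload-950460 ssh -- sudo crio config | grep runtime_root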
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950460 -n no-preload-950460
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950460 -n no-preload-950460: exit status 2 (359.862073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-950460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.21s)
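The Pause test never reaches the actual pause: the pre-flight "check paused: list paused" step is only a read of container state, and it aborts on the missing root directory. Conceptually the check reduces to the sketch below (a minimal sketch, assuming `jq` is available; the `id` and `status` field names follow runc's documented `list -f json` output, and /run/runc is the root the probe passes on this runner):

	# Hypothetical equivalent of the "list paused" step:
	sudo runc --root /run/runc list -f json \
	  | jq -r '.[] | select(.status == "paused") | .id'
	# On this node the pipeline fails at the first command, before any JSON
	# is produced, so every pause/addon operation embedding this probe exits 1.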

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (336.700127ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
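The cluster here is freshly started and demonstrably not paused; exit status 11 comes from the same `runc --root /run/runc` probe seen above, which the addon path runs before enabling anything. Two quick checks, shown only as illustrations (the docker inspect output further down confirms the first):

	# The kic container is not paused at the Docker level:
	docker inspect newest-cni-479871 --format '{{.State.Paused}}'
	# -> false

	# And CRI-O reports its containers normally over the CRI socket:
	out/minikube-linux-amd64 -p newest-cni-479871 ssh -- sudo crictl ps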
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-479871
helpers_test.go:244: (dbg) docker inspect newest-cni-479871:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	        "Created": "2025-12-28T06:56:54.089539242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:54.127824235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hosts",
	        "LogPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8-json.log",
	        "Name": "/newest-cni-479871",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-479871:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-479871",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	                "LowerDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-479871",
	                "Source": "/var/lib/docker/volumes/newest-cni-479871/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-479871",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-479871",
	                "name.minikube.sigs.k8s.io": "newest-cni-479871",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ea93018cf1ba7ab79ee46064bb0701d777a85b3c444b7d97b422b27cb65ab44f",
	            "SandboxKey": "/var/run/docker/netns/ea93018cf1ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-479871": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e93d5a2f53a18661bc1ad8ca49ab2b022bc0bcfa1a555873e6d7e016530b0cb",
	                    "EndpointID": "dfcc8113cfddd049687a791f457ba3f18dd6fefb9a453472cdb262244fb78116",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "82:a4:9c:27:4f:2d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-479871",
	                        "c33fbf6c5387"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25: (1.113940861s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-785573                                                                                                                                                                                                                        │ test-preload-785573          │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ delete  │ -p stopped-upgrade-416029                                                                                                                                                                                                                     │ stopped-upgrade-416029       │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ delete  │ -p disable-driver-mounts-719168                                                                                                                                                                                                               │ disable-driver-mounts-719168 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:55 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-950460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │                     │
	│ stop    │ -p no-preload-950460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:55 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:56:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:56:51.304822  261568 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:56:51.304949  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.304962  261568 out.go:374] Setting ErrFile to fd 2...
	I1228 06:56:51.304969  261568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:56:51.305236  261568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:56:51.305658  261568 out.go:368] Setting JSON to false
	I1228 06:56:51.306949  261568 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2363,"bootTime":1766902648,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:56:51.306998  261568 start.go:143] virtualization: kvm guest
	I1228 06:56:51.312562  261568 out.go:179] * [default-k8s-diff-port-500581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:56:51.313893  261568 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:56:51.313933  261568 notify.go:221] Checking for updates...
	I1228 06:56:51.316760  261568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:56:51.318014  261568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:51.322529  261568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:56:51.323905  261568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:56:51.325197  261568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:56:51.326905  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:51.327673  261568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:56:51.352695  261568 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:56:51.352843  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.414000  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.40353353 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:51.414142  261568 docker.go:319] overlay module found
	I1228 06:56:51.418800  261568 out.go:179] * Using the docker driver based on existing profile
	I1228 06:56:51.419979  261568 start.go:309] selected driver: docker
	I1228 06:56:51.419992  261568 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.420098  261568 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:56:51.420695  261568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:56:51.478184  261568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-28 06:56:51.468547864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:56:51.478493  261568 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:56:51.478528  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:51.478601  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:51.478656  261568 start.go:353] cluster config:
	{Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:51.480689  261568 out.go:179] * Starting "default-k8s-diff-port-500581" primary control-plane node in "default-k8s-diff-port-500581" cluster
	I1228 06:56:51.482007  261568 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:56:51.483353  261568 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:56:51.484469  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.484517  261568 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:56:51.484526  261568 cache.go:65] Caching tarball of preloaded images
	I1228 06:56:51.484594  261568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:56:51.484617  261568 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:56:51.484731  261568 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:56:51.484886  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.507639  261568 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:56:51.507662  261568 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:56:51.507678  261568 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:56:51.507716  261568 start.go:360] acquireMachinesLock for default-k8s-diff-port-500581: {Name:mk09ab6a942c8bf16d457c533e6be9200b317247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:56:51.507793  261568 start.go:364] duration metric: took 42.618µs to acquireMachinesLock for "default-k8s-diff-port-500581"
	I1228 06:56:51.507811  261568 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:56:51.507818  261568 fix.go:54] fixHost starting: 
	I1228 06:56:51.508017  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.526407  261568 fix.go:112] recreateIfNeeded on default-k8s-diff-port-500581: state=Stopped err=<nil>
	W1228 06:56:51.526437  261568 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:56:49.299782  260283 out.go:252] * Restarting existing docker container for "embed-certs-422591" ...
	I1228 06:56:49.299856  260283 cli_runner.go:164] Run: docker start embed-certs-422591
	I1228 06:56:50.029376  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:50.048972  260283 kic.go:430] container "embed-certs-422591" state is running.
	I1228 06:56:50.049416  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:50.070752  260283 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/config.json ...
	I1228 06:56:50.070988  260283 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:50.071086  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:50.094281  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:50.094592  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:50.094614  260283 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:50.095430  260283 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32768->127.0.0.1:33083: read: connection reset by peer
	I1228 06:56:53.224998  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.225041  260283 ubuntu.go:182] provisioning hostname "embed-certs-422591"
	I1228 06:56:53.225100  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.244551  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.244828  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.244846  260283 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-422591 && echo "embed-certs-422591" | sudo tee /etc/hostname
	I1228 06:56:53.389453  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-422591
	
	I1228 06:56:53.389539  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.409408  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.409692  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.409717  260283 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422591/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:53.535649  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:53.535685  260283 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:53.535733  260283 ubuntu.go:190] setting up certificates
	I1228 06:56:53.535752  260283 provision.go:84] configureAuth start
	I1228 06:56:53.535838  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:53.554332  260283 provision.go:143] copyHostCerts
	I1228 06:56:53.554402  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:53.554423  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:53.554514  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:53.554657  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:53.554671  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:53.554718  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:53.554817  260283 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:53.554834  260283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:53.554898  260283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:53.554996  260283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422591 san=[127.0.0.1 192.168.76.2 embed-certs-422591 localhost minikube]
	I1228 06:56:53.616863  260283 provision.go:177] copyRemoteCerts
	I1228 06:56:53.616949  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:53.616995  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.635721  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:53.727300  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:53.745199  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1228 06:56:53.763059  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:53.779536  260283 provision.go:87] duration metric: took 243.761087ms to configureAuth
	I1228 06:56:53.779563  260283 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:53.779720  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:53.779833  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:53.797684  260283 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:53.797962  260283 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1228 06:56:53.797993  260283 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1228 06:56:51.187721  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	W1228 06:56:53.686879  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:50.480049  260915 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:56:50.480300  260915 start.go:159] libmachine.API.Create for "newest-cni-479871" (driver="docker")
	I1228 06:56:50.480357  260915 client.go:173] LocalClient.Create starting
	I1228 06:56:50.480438  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:56:50.480482  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480504  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.480573  260915 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:56:50.480601  260915 main.go:144] libmachine: Decoding PEM data...
	I1228 06:56:50.480625  260915 main.go:144] libmachine: Parsing certificate...
	I1228 06:56:50.481050  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:56:50.497636  260915 cli_runner.go:211] docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:56:50.497706  260915 network_create.go:284] running [docker network inspect newest-cni-479871] to gather additional debugging logs...
	I1228 06:56:50.497723  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871
	W1228 06:56:50.516872  260915 cli_runner.go:211] docker network inspect newest-cni-479871 returned with exit code 1
	I1228 06:56:50.516901  260915 network_create.go:287] error running [docker network inspect newest-cni-479871]: docker network inspect newest-cni-479871: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-479871 not found
	I1228 06:56:50.516925  260915 network_create.go:289] output of [docker network inspect newest-cni-479871]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-479871 not found
	
	** /stderr **
	I1228 06:56:50.517047  260915 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:50.535337  260915 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:56:50.536022  260915 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:56:50.536725  260915 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:56:50.537233  260915 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:56:50.538018  260915 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5f10}
	I1228 06:56:50.538069  260915 network_create.go:124] attempt to create docker network newest-cni-479871 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:56:50.538139  260915 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-479871 newest-cni-479871
	I1228 06:56:50.590599  260915 network_create.go:108] docker network newest-cni-479871 192.168.85.0/24 created
	I1228 06:56:50.590626  260915 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-479871" container
	I1228 06:56:50.590684  260915 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:56:50.612756  260915 cli_runner.go:164] Run: docker volume create newest-cni-479871 --label name.minikube.sigs.k8s.io=newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:56:50.632558  260915 oci.go:103] Successfully created a docker volume newest-cni-479871
	I1228 06:56:50.632647  260915 cli_runner.go:164] Run: docker run --rm --name newest-cni-479871-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --entrypoint /usr/bin/test -v newest-cni-479871:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:56:51.057547  260915 oci.go:107] Successfully prepared a docker volume newest-cni-479871
	I1228 06:56:51.057623  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:51.057634  260915 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:56:51.057688  260915 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:56:54.002932  260915 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-479871:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.945200662s)
	I1228 06:56:54.002968  260915 kic.go:203] duration metric: took 2.94532948s to extract preloaded images to volume ...
	W1228 06:56:54.003085  260915 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:56:54.003131  260915 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:56:54.003194  260915 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:56:54.071814  260915 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-479871 --name newest-cni-479871 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-479871 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-479871 --network newest-cni-479871 --ip 192.168.85.2 --volume newest-cni-479871:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:56:54.369279  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Running}}
	I1228 06:56:54.388635  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.408312  260915 cli_runner.go:164] Run: docker exec newest-cni-479871 stat /var/lib/dpkg/alternatives/iptables
	I1228 06:56:54.458080  260915 oci.go:144] the created container "newest-cni-479871" has a running status.
	I1228 06:56:54.458112  260915 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa...
	I1228 06:56:54.551688  260915 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:56:54.583285  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.607350  260915 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:56:54.607368  260915 kic_runner.go:114] Args: [docker exec --privileged newest-cni-479871 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 06:56:54.652142  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:56:54.681007  260915 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:54.681235  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.705265  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.705490  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.705498  260915 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:54.841048  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:54.841091  260915 ubuntu.go:182] provisioning hostname "newest-cni-479871"
	I1228 06:56:54.841152  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:54.860627  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:54.860944  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:54.860965  260915 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-479871 && echo "newest-cni-479871" | sudo tee /etc/hostname
	I1228 06:56:55.000794  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:56:55.000873  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.023082  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.023416  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.023451  260915 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-479871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-479871/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-479871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.155462  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.155487  260915 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.155505  260915 ubuntu.go:190] setting up certificates
	I1228 06:56:55.155516  260915 provision.go:84] configureAuth start
	I1228 06:56:55.155581  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.175395  260915 provision.go:143] copyHostCerts
	I1228 06:56:55.175450  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.175460  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.175531  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.175657  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.175670  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.175711  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.175807  260915 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.175819  260915 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.175860  260915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.175997  260915 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.newest-cni-479871 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-479871]
	I1228 06:56:55.234134  260915 provision.go:177] copyRemoteCerts
	I1228 06:56:55.234200  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.234257  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.253397  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:54.168584  260283 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:54.168615  260283 machine.go:97] duration metric: took 4.09761028s to provisionDockerMachine
	I1228 06:56:54.168631  260283 start.go:293] postStartSetup for "embed-certs-422591" (driver="docker")
	I1228 06:56:54.168660  260283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:54.168725  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:54.168787  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.192016  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.304369  260283 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:54.308295  260283 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:54.308330  260283 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:54.308342  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:54.308408  260283 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:54.308518  260283 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:54.308669  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:54.316305  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:54.333546  260283 start.go:296] duration metric: took 164.900492ms for postStartSetup
	I1228 06:56:54.333638  260283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:54.333685  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.354220  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.444937  260283 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:54.451873  260283 fix.go:56] duration metric: took 5.175283325s for fixHost
	I1228 06:56:54.451930  260283 start.go:83] releasing machines lock for "embed-certs-422591", held for 5.17534762s
	I1228 06:56:54.452000  260283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422591
	I1228 06:56:54.471600  260283 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:54.471642  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.471728  260283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:54.471811  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:54.492447  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.492692  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:54.656519  260283 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:54.666648  260283 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:54.712845  260283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:54.719909  260283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:54.719980  260283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:54.729922  260283 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:54.729983  260283 start.go:496] detecting cgroup driver to use...
	I1228 06:56:54.730019  260283 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:54.730084  260283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:54.745512  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:54.760533  260283 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:54.760588  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:54.776631  260283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:54.789719  260283 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:54.887189  260283 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:54.981826  260283 docker.go:234] disabling docker service ...
	I1228 06:56:54.981900  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:55.001365  260283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:55.016902  260283 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:55.113674  260283 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:55.201172  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:55.213948  260283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:55.229743  260283 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:55.229795  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.238954  260283 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:55.239021  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.248040  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.257595  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.266670  260283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:55.275055  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.284080  260283 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.292518  260283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:55.301093  260283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:55.308817  260283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:55.316372  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.403600  260283 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:55.536797  260283 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:55.536860  260283 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:55.541349  260283 start.go:574] Will wait 60s for crictl version
	I1228 06:56:55.541437  260283 ssh_runner.go:195] Run: which crictl
	I1228 06:56:55.544932  260283 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:55.573996  260283 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:55.574084  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.603216  260283 ssh_runner.go:195] Run: crio --version
	I1228 06:56:55.635699  260283 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:51.528193  261568 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-500581" ...
	I1228 06:56:51.528256  261568 cli_runner.go:164] Run: docker start default-k8s-diff-port-500581
	I1228 06:56:51.794281  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:51.813604  261568 kic.go:430] container "default-k8s-diff-port-500581" state is running.
	I1228 06:56:51.813999  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:51.836391  261568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/config.json ...
	I1228 06:56:51.836675  261568 machine.go:94] provisionDockerMachine start ...
	I1228 06:56:51.836769  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:51.856837  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:51.857168  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:51.857185  261568 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:56:51.857850  261568 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56468->127.0.0.1:33088: read: connection reset by peer
	I1228 06:56:54.989220  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:54.989252  261568 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-500581"
	I1228 06:56:54.989314  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.011189  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.011424  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.011443  261568 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-500581 && echo "default-k8s-diff-port-500581" | sudo tee /etc/hostname
	I1228 06:56:55.160703  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-500581
	
	I1228 06:56:55.160788  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.180898  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.181227  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.181257  261568 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-500581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-500581/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-500581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:56:55.307110  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:56:55.307133  261568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:56:55.307155  261568 ubuntu.go:190] setting up certificates
	I1228 06:56:55.307172  261568 provision.go:84] configureAuth start
	I1228 06:56:55.307219  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:55.326689  261568 provision.go:143] copyHostCerts
	I1228 06:56:55.326750  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:56:55.326761  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:56:55.326811  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:56:55.326966  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:56:55.326979  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:56:55.327002  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:56:55.327100  261568 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:56:55.327110  261568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:56:55.327132  261568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:56:55.327202  261568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-500581 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-500581 localhost minikube]
	I1228 06:56:55.373177  261568 provision.go:177] copyRemoteCerts
	I1228 06:56:55.373236  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:56:55.373295  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.392900  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.486399  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.505187  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1228 06:56:55.522853  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.540417  261568 provision.go:87] duration metric: took 233.223896ms to configureAuth
	I1228 06:56:55.540444  261568 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.540674  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.540784  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.560885  261568 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.561205  261568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1228 06:56:55.561248  261568 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.912261  261568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.912292  261568 machine.go:97] duration metric: took 4.075596904s to provisionDockerMachine
	I1228 06:56:55.912309  261568 start.go:293] postStartSetup for "default-k8s-diff-port-500581" (driver="docker")
	I1228 06:56:55.912323  261568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.912405  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.912473  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:55.934789  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.028978  261568 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:56.033725  261568 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:56.033788  261568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:56.033803  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:56.033860  261568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:56.033970  261568 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:56.034118  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:56.043909  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:56.068426  261568 start.go:296] duration metric: took 156.102069ms for postStartSetup
	I1228 06:56:56.068509  261568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:56.068568  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.094504  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.186274  261568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.192245  261568 fix.go:56] duration metric: took 4.684422638s for fixHost
	I1228 06:56:56.192269  261568 start.go:83] releasing machines lock for "default-k8s-diff-port-500581", held for 4.684465564s
	I1228 06:56:56.192339  261568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-500581
	I1228 06:56:56.215984  261568 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.216056  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.216085  261568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.216168  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:56.236830  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:56.237219  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:55.636809  260283 cli_runner.go:164] Run: docker network inspect embed-certs-422591 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:55.657292  260283 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:55.661351  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:55.671982  260283 kubeadm.go:884] updating cluster {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:55.672135  260283 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:55.672197  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.717231  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.717252  260283 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:55.717304  260283 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:55.750510  260283 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:55.750537  260283 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:55.750545  260283 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:55.750638  260283 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
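
The block above is the generated kubelet systemd drop-in, which the log later writes as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes): ExecStart= is cleared first, then redefined with node-specific flags. A sketch of how such a drop-in could be rendered with text/template; only the flags and values come from the log, the struct and template layout are illustrative assumptions.

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletDropIn mirrors the [Service] override in the log: an empty
    // ExecStart= resets the base unit, then the full command line follows.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("dropin").Parse(kubeletDropIn))
        t.Execute(os.Stdout, struct {
            KubeletPath, NodeName, NodeIP string
        }{
            KubeletPath: "/var/lib/minikube/binaries/v1.35.0/kubelet",
            NodeName:    "embed-certs-422591",
            NodeIP:      "192.168.76.2",
        })
    }
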
	I1228 06:56:55.750697  260283 ssh_runner.go:195] Run: crio config
	I1228 06:56:55.798757  260283 cni.go:84] Creating CNI manager for ""
	I1228 06:56:55.798781  260283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:55.798794  260283 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:55.798816  260283 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422591 NodeName:embed-certs-422591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:55.798981  260283 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422591"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
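
The kubeadm config dump above is four YAML documents in a single stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — which the log later copies to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes). A small sketch, assuming the file already exists at that path, that splits the stream on "---" separators and reports each document's kind plus the total size:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // A YAML document separator is a line consisting of "---".
        docs := regexp.MustCompile(`(?m)^---\s*$`).Split(string(data), -1)
        for i, d := range docs {
            if m := kindRe.FindStringSubmatch(d); m != nil {
                fmt.Printf("doc %d: kind=%s (%d bytes)\n", i, m[1], len(strings.TrimSpace(d)))
            }
        }
        fmt.Printf("total: %d docs, %d bytes\n", len(docs), len(data))
    }
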
	I1228 06:56:55.799071  260283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:55.808067  260283 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:55.808139  260283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:55.816236  260283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1228 06:56:55.830081  260283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:55.844082  260283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1228 06:56:55.857168  260283 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:55.861349  260283 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
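
The two lines above are minikube's idempotent /etc/hosts edit: grep -v strips any stale control-plane.minikube.internal mapping, the fresh "ip<TAB>host" line is appended, and the result is copied back with sudo. The same filter-then-append step in pure Go (the IP, hostname, and path are from the log; the sketch prints the rebuilt file instead of writing it back, since that still needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rebuilds an /etc/hosts body so it contains exactly one
    // "<ip>\t<host>" line, mirroring the grep -v / echo pipeline in the log.
    func ensureHostsEntry(body, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(body, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        body, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(ensureHostsEntry(string(body), "192.168.76.2", "control-plane.minikube.internal"))
    }
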
	I1228 06:56:55.872967  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:55.969484  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:55.991172  260283 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591 for IP: 192.168.76.2
	I1228 06:56:55.991194  260283 certs.go:195] generating shared ca certs ...
	I1228 06:56:55.991213  260283 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:55.991369  260283 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:55.991423  260283 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:55.991435  260283 certs.go:257] generating profile certs ...
	I1228 06:56:55.991549  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/client.key
	I1228 06:56:55.991631  260283 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key.3be22f86
	I1228 06:56:55.991682  260283 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key
	I1228 06:56:55.991823  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:55.991865  260283 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:55.991877  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:55.991914  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:55.991950  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:55.991981  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:55.992051  260283 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.992737  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:56.012567  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:56.034343  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:56.057165  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:56.079350  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1228 06:56:56.103893  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:56.123746  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:56.141940  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/embed-certs-422591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:56.160463  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:56.177728  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:56.199019  260283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:56.220395  260283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:56.235535  260283 ssh_runner.go:195] Run: openssl version
	I1228 06:56:56.242495  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.250951  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:56.260106  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264522  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.264582  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:56.302672  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:56.310442  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.318190  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:56.326937  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330782  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.330838  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:56.366947  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:56.374588  260283 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.382855  260283 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:56.392178  260283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400669  260283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.400781  260283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:56.443361  260283 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
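
The repeating pattern above installs each PEM under /usr/share/ca-certificates and then verifies an OpenSSL subject-hash symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0 in this run). Computing that hash is easiest by shelling out to the same `openssl x509 -hash` command the log runs; a sketch, with os.Symlink standing in for `sudo ln -fs` (so it needs write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // subjectHash runs `openssl x509 -hash -noout -in <pem>` and returns the
    // 8-hex-digit subject hash OpenSSL uses to name trust-store symlinks.
    func subjectHash(pem string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        h, err := subjectHash(pem)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        link := filepath.Join("/etc/ssl/certs", h+".0")
        os.Remove(link) // the -f in `ln -fs`: replace a stale link if present
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pem)
    }
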
	I1228 06:56:56.451380  260283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:56.455260  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:56.493195  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:56.552322  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:56.610967  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:56.678082  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:56.744904  260283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
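
Each `openssl x509 -noout -in <crt> -checkend 86400` above asks one question: does this certificate remain valid for at least another 24 hours (86400 seconds)? The equivalent test in pure Go with crypto/x509; the path is one of the certificates from the log, the function name is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in a PEM file is still
    // valid `d` from now -- the crypto/x509 equivalent of -checkend.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for 24h:", ok)
    }
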
	I1228 06:56:56.802976  260283 kubeadm.go:401] StartCluster: {Name:embed-certs-422591 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-422591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:56.803131  260283 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:56.887317  260283 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:56.902690  260283 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:56Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:56.902780  260283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:56.911889  260283 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:56.911919  260283 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:56.911966  260283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:56.921385  260283 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:56.922175  260283 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422591" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.922628  260283 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422591" cluster setting kubeconfig missing "embed-certs-422591" context setting]
	I1228 06:56:56.923248  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.924994  260283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:56.935152  260283 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1228 06:56:56.935190  260283 kubeadm.go:602] duration metric: took 23.263516ms to restartPrimaryControlPlane
	I1228 06:56:56.935207  260283 kubeadm.go:403] duration metric: took 132.238201ms to StartCluster
	I1228 06:56:56.935226  260283 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.935306  260283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:56.936685  260283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:56.936960  260283 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:56.937200  260283 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:56.937287  260283 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422591"
	I1228 06:56:56.937304  260283 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422591"
	W1228 06:56:56.937311  260283 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:56.937316  260283 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:56.937338  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.937426  260283 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422591"
	I1228 06:56:56.937441  260283 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422591"
	I1228 06:56:56.937706  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937808  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.937839  260283 addons.go:70] Setting dashboard=true in profile "embed-certs-422591"
	I1228 06:56:56.937859  260283 addons.go:239] Setting addon dashboard=true in "embed-certs-422591"
	W1228 06:56:56.937868  260283 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:56.937892  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.938390  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.939110  260283 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:56.940441  260283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:56.975612  260283 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:56.976794  260283 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:56.976818  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:56.976876  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:56.981192  260283 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422591"
	W1228 06:56:56.981219  260283 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:56.981245  260283 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:56:56.981694  260283 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:56:56.982695  260283 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:56.984042  260283 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:56:55.347118  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:56:55.367880  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:56:55.385829  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:56:55.405574  260915 provision.go:87] duration metric: took 250.043655ms to configureAuth
	I1228 06:56:55.405599  260915 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:56:55.405793  260915 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:56:55.405923  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.426557  260915 main.go:144] libmachine: Using SSH client type: native
	I1228 06:56:55.426761  260915 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1228 06:56:55.426777  260915 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:56:55.707096  260915 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:56:55.707127  260915 machine.go:97] duration metric: took 1.025985439s to provisionDockerMachine
	I1228 06:56:55.707141  260915 client.go:176] duration metric: took 5.226772639s to LocalClient.Create
	I1228 06:56:55.707163  260915 start.go:167] duration metric: took 5.226863018s to libmachine.API.Create "newest-cni-479871"
	I1228 06:56:55.707178  260915 start.go:293] postStartSetup for "newest-cni-479871" (driver="docker")
	I1228 06:56:55.707191  260915 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:56:55.707328  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:56:55.707387  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.730590  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:55.828324  260915 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:56:55.832265  260915 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:56:55.832288  260915 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:56:55.832299  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:56:55.832350  260915 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:56:55.832419  260915 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:56:55.832512  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:56:55.839863  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:55.861613  260915 start.go:296] duration metric: took 154.42382ms for postStartSetup
	I1228 06:56:55.861983  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:55.882165  260915 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:56:55.882431  260915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:56:55.882487  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:55.907110  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.002055  260915 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:56:56.007480  260915 start.go:128] duration metric: took 5.529512048s to createHost
	I1228 06:56:56.007505  260915 start.go:83] releasing machines lock for "newest-cni-479871", held for 5.529670542s
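
During postStartSetup the host disk is checked twice: `df -h /var | awk 'NR==2{print $5}'` for the used percentage and `df -BG /var | awk 'NR==2{print $4}'` for free gigabytes (row 2 is the data row; field 4 is the Available column). A Go sketch of the second check, parsing the same df invocation; the function name and the choice to return an int are assumptions:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strconv"
        "strings"
    )

    // freeGB parses `df -BG <path>`: line 2 is the data row and its 4th
    // field is the available space, e.g. "17G".
    func freeGB(path string) (int, error) {
        out, err := exec.Command("df", "-BG", path).Output()
        if err != nil {
            return 0, err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return 0, fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if len(fields) < 4 {
            return 0, fmt.Errorf("unexpected df row: %q", lines[1])
        }
        return strconv.Atoi(strings.TrimSuffix(fields[3], "G"))
    }

    func main() {
        gb, err := freeGB("/var")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("/var has %dG free\n", gb)
    }
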
	I1228 06:56:56.007573  260915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:56:56.029672  260915 ssh_runner.go:195] Run: cat /version.json
	I1228 06:56:56.029705  260915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:56:56.029725  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.029776  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:56:56.055251  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.056879  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:56:56.223588  260915 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.231474  260915 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.270713  260915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.275245  260915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.275311  260915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.303121  260915 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
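
The find/mv pass above disables any pre-existing bridge or podman CNI configs by renaming them with an .mk_disabled suffix, so that kindnet (recommended earlier for the docker driver + crio runtime) can own pod networking. A Go sketch of the same rename pass; the directory, glob patterns, and suffix come straight from the logged find expression:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    continue
                }
                disabled = append(disabled, f)
            }
        }
        fmt.Println("disabled:", disabled)
    }
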
	I1228 06:56:56.303143  260915 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.303180  260915 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.303231  260915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.319367  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.332383  260915 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.332437  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.349611  260915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.366740  260915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.458933  260915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.581970  260915 docker.go:234] disabling docker service ...
	I1228 06:56:56.582057  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.611636  260915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.629973  260915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.778762  260915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:56.898948  260915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:56.915292  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:56.936739  260915 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:56.936802  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.957436  260915 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:56.957511  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.970285  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:56.991323  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.012351  260915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.030720  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.044267  260915 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.063444  260915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.076260  260915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.086701  260915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.094844  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.197445  260915 ssh_runner.go:195] Run: sudo systemctl restart crio
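
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pin the pause image, force the systemd cgroup manager, reset conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls — then daemon-reloads and restarts crio. The two single-line substitutions as a Go regexp sketch over the file contents (same keys and values as the log; operating on a string instead of the file keeps the sketch self-contained):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the two sed-style line replacements from the
    // log: any existing pause_image / cgroup_manager line is overwritten.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
        fmt.Print(rewriteCrioConf(in))
    }
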
	I1228 06:56:57.376208  260915 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.376288  260915 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.381285  260915 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.381333  260915 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.386277  260915 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.416647  260915 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:57.416739  260915 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.451001  260915 ssh_runner.go:195] Run: crio --version
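
After the restart, the runtime is given two 60-second grace periods: first for /var/run/crio/crio.sock to appear, then for `crictl version` to answer. A sketch of the first wait, assuming plain stat-polling at a fixed interval is close enough to what the log shows (the interval is an assumption; path and timeout are from the log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls os.Stat until the path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an assumption
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is up")
    }
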
	I1228 06:56:57.487677  260915 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:56:57.488839  260915 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.510156  260915 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.515131  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.529473  260915 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 06:56:56.380475  261568 ssh_runner.go:195] Run: systemctl --version
	I1228 06:56:56.388512  261568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:56:56.432498  261568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:56:56.437345  261568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:56:56.437405  261568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:56:56.445717  261568 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:56:56.445738  261568 start.go:496] detecting cgroup driver to use...
	I1228 06:56:56.445770  261568 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:56:56.445818  261568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:56:56.460887  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:56:56.472988  261568 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:56:56.473075  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:56:56.488438  261568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:56:56.505894  261568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:56:56.621379  261568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:56:56.764198  261568 docker.go:234] disabling docker service ...
	I1228 06:56:56.764262  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:56:56.784627  261568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:56:56.801487  261568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:56:56.935018  261568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:56:57.099832  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:56:57.114590  261568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:56:57.138584  261568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:56:57.138648  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.149353  261568 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:56:57.149428  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.160151  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.171588  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.182489  261568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:56:57.193579  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.206803  261568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.219708  261568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:56:57.230493  261568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:56:57.241799  261568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:56:57.254056  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.353683  261568 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:56:57.510586  261568 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:56:57.510663  261568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:56:57.515591  261568 start.go:574] Will wait 60s for crictl version
	I1228 06:56:57.515660  261568 ssh_runner.go:195] Run: which crictl
	I1228 06:56:57.520214  261568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:56:57.552121  261568 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:56:57.552210  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.588059  261568 ssh_runner.go:195] Run: crio --version
	I1228 06:56:57.633785  261568 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	W1228 06:56:55.687228  252331 pod_ready.go:104] pod "coredns-7d764666f9-npk6g" is not "Ready", error: <nil>
	I1228 06:56:56.187608  252331 pod_ready.go:94] pod "coredns-7d764666f9-npk6g" is "Ready"
	I1228 06:56:56.187639  252331 pod_ready.go:86] duration metric: took 35.50648982s for pod "coredns-7d764666f9-npk6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.190301  252331 pod_ready.go:83] waiting for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.194625  252331 pod_ready.go:94] pod "etcd-no-preload-950460" is "Ready"
	I1228 06:56:56.194650  252331 pod_ready.go:86] duration metric: took 4.324521ms for pod "etcd-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.196770  252331 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.200996  252331 pod_ready.go:94] pod "kube-apiserver-no-preload-950460" is "Ready"
	I1228 06:56:56.201021  252331 pod_ready.go:86] duration metric: took 4.22637ms for pod "kube-apiserver-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.203067  252331 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.386984  252331 pod_ready.go:94] pod "kube-controller-manager-no-preload-950460" is "Ready"
	I1228 06:56:56.387016  252331 pod_ready.go:86] duration metric: took 183.928403ms for pod "kube-controller-manager-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.586132  252331 pod_ready.go:83] waiting for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:56.998562  252331 pod_ready.go:94] pod "kube-proxy-294rn" is "Ready"
	I1228 06:56:56.998589  252331 pod_ready.go:86] duration metric: took 412.431002ms for pod "kube-proxy-294rn" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.186108  252331 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585825  252331 pod_ready.go:94] pod "kube-scheduler-no-preload-950460" is "Ready"
	I1228 06:56:57.585854  252331 pod_ready.go:86] duration metric: took 399.717455ms for pod "kube-scheduler-no-preload-950460" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:56:57.585870  252331 pod_ready.go:40] duration metric: took 36.908067526s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:56:57.640725  252331 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:56:57.643532  252331 out.go:179] * Done! kubectl is now configured to use "no-preload-950460" cluster and "default" namespace by default
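
The 252331 block above is the pod-readiness gate for no-preload-950460: each control-plane pod in kube-system is polled until its Ready condition is true or it disappears, with per-pod duration metrics (the 35.5 s coredns wait dominating the 36.9 s total). A scripted equivalent using the standard `kubectl wait --for=condition=Ready`, wrapped in Go; the namespace and pod names are from the log, the timeout is an assumption:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitReady shells out to `kubectl wait`, which blocks until the pod's
    // Ready condition is true or the timeout expires.
    func waitReady(namespace, pod string, timeout time.Duration) error {
        cmd := exec.Command("kubectl", "wait", "--namespace", namespace,
            "--for=condition=Ready", "pod/"+pod,
            fmt.Sprintf("--timeout=%s", timeout))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, pod := range []string{
            "etcd-no-preload-950460",
            "kube-apiserver-no-preload-950460",
            "kube-scheduler-no-preload-950460",
        } {
            start := time.Now()
            if err := waitReady("kube-system", pod, 2*time.Minute); err != nil {
                fmt.Fprintln(os.Stderr, pod, err)
                os.Exit(1)
            }
            fmt.Printf("%s ready after %s\n", pod, time.Since(start))
        }
    }
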
	I1228 06:56:56.986006  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:56.986182  260283 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:56.986292  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.021254  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.026470  260283 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.026498  260283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:57.026561  260283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:56:57.032342  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.052347  260283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:56:57.114798  260283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.136438  260283 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:57.143093  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:57.146367  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:57.146437  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:57.159893  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:57.162997  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:57.163021  260283 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:57.178802  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:57.178824  260283 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:57.195442  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:57.195462  260283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:57.215683  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:57.215712  260283 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:57.234390  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:57.234464  260283 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:57.250624  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:57.250659  260283 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:57.269371  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:57.269405  260283 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:57.287286  260283 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:57.287318  260283 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:57.303823  260283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:58.341277  260283 node_ready.go:49] node "embed-certs-422591" is "Ready"
	I1228 06:56:58.341486  260283 node_ready.go:38] duration metric: took 1.204996046s for node "embed-certs-422591" to be "Ready" ...
	I1228 06:56:58.341543  260283 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:56:58.341625  260283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:56:59.079200  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.9360724s)
	I1228 06:56:59.079284  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.919364076s)
	I1228 06:56:59.079836  260283 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.77597476s)
	I1228 06:56:59.079928  260283 api_server.go:72] duration metric: took 2.142935627s to wait for apiserver process to appear ...
	I1228 06:56:59.080185  260283 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:56:59.080283  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.081622  260283 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422591 addons enable metrics-server
	
	I1228 06:56:59.086704  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.086730  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:56:59.096150  260283 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
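
The 500 above is normal this early in a restart: /healthz aggregates every apiserver post-start hook, and the two "[-]" entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) simply have not finished yet, so minikube keeps polling until the endpoint turns 200. A sketch of one polling round; the address comes from the log, while skipping TLS verification (instead of loading minikube's CA) and the 5-second timeout are simplifying assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthz fetches the endpoint once and returns status plus body; callers
    // retry on non-200. InsecureSkipVerify stands in for loading the CA.
    func healthz(url string) (int, string, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return 0, "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return resp.StatusCode, string(body), err
    }

    func main() {
        code, body, err := healthz("https://192.168.76.2:8443/healthz") // address from the log
        if err != nil {
            fmt.Println("probe error:", err)
            return
        }
        fmt.Println("status:", code)
        if code != http.StatusOK {
            fmt.Println(body) // the [+]/[-] hook report seen in the log
        }
    }
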
	I1228 06:56:57.634878  261568 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-500581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:56:57.656475  261568 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1228 06:56:57.662868  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.680225  261568 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.680387  261568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.680441  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.725731  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.725752  261568 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.725791  261568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.758843  261568 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.758867  261568 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:57.758878  261568 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.35.0 crio true true} ...
	I1228 06:56:57.759067  261568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-500581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
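
The [Service] block in the unit above uses the standard systemd drop-in trick: the first, empty ExecStart= clears the ExecStart inherited from kubelet.service, and the second line substitutes the per-profile flags. minikube renders this text from the node config and ships it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (visible in the scp a few lines below). A toy rendering with text/template; the field names are assumptions for illustration, not minikube's actual struct:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Hypothetical keys; values taken from the log above.
    	t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.35.0",
    		"Node":    "default-k8s-diff-port-500581",
    		"IP":      "192.168.103.2",
    	})
    }
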
	I1228 06:56:57.759165  261568 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.825229  261568 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.825249  261568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.825263  261568 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:56:57.825283  261568 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-500581 NodeName:default-k8s-diff-port-500581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.825427  261568 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-500581"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:57.825488  261568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.834015  261568 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.834104  261568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.842957  261568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1228 06:56:57.861130  261568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.875931  261568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
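
The 2227-byte kubeadm.yaml.new shipped above is the four-document config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) concatenated into one file with --- separators, which the consumer splits and dispatches by kind. A minimal sketch of that splitting step, assuming well-formed input; a real consumer would use a YAML decoder rather than a regexp:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func main() {
    	multi := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    `
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	// Split on document separators and report each document's kind.
    	for _, doc := range strings.Split(multi, "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Println("found document:", m[1])
    		}
    	}
    }
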
	I1228 06:56:57.890937  261568 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.894724  261568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.904606  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:58.027677  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:58.050675  261568 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581 for IP: 192.168.103.2
	I1228 06:56:58.050696  261568 certs.go:195] generating shared ca certs ...
	I1228 06:56:58.050715  261568 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.050893  261568 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:58.050947  261568 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:58.050958  261568 certs.go:257] generating profile certs ...
	I1228 06:56:58.051080  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/client.key
	I1228 06:56:58.051160  261568 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key.4e0fc9ea
	I1228 06:56:58.051212  261568 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key
	I1228 06:56:58.051319  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.051361  261568 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.051375  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.051416  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.051453  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.051491  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.051540  261568 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.052173  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.074301  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.094763  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.114646  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.151474  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1228 06:56:58.178111  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.196129  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.225303  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/default-k8s-diff-port-500581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:56:58.252987  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.275157  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.292772  261568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.324117  261568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.344196  261568 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.359329  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.373180  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.388547  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397646  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.397716  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.463000  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.472957  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.482337  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.493234  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497494  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.497554  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.554499  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.563535  261568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.571433  261568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.580593  261568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586440  261568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.586531  261568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.645335  261568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
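
The ln -fs / openssl x509 -hash sequence above is how OpenSSL-style trust directories work: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (here 51391683.0, 3ec20f2e.0 and b5213941.0), because verification looks certificates up by hashed subject rather than by filename. A sketch of installing one such link, shelling out to openssl exactly as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert installs pemPath into certsDir under its OpenSSL subject-hash name.
    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
    	os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }
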
	I1228 06:56:58.658570  261568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.664780  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:56:58.731559  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:56:58.794292  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:56:58.854366  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:56:58.912352  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:56:58.971537  261568 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
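
The -checkend 86400 runs above ask openssl whether each control-plane cert expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration rather than reuse. The same check expressed in Go against a PEM-encoded certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires
    // within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
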
	I1228 06:56:59.020042  261568 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-500581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-500581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:59.020173  261568 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:59.077797  261568 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:59.092934  261568 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:59Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:59.093006  261568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:59.104271  261568 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:56:59.104290  261568 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:56:59.104344  261568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:56:59.114137  261568 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:56:59.115134  261568 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-500581" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.115666  261568 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-500581" cluster setting kubeconfig missing "default-k8s-diff-port-500581" context setting]
	I1228 06:56:59.116519  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.118500  261568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:56:59.129715  261568 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1228 06:56:59.129755  261568 kubeadm.go:602] duration metric: took 25.457297ms to restartPrimaryControlPlane
	I1228 06:56:59.129767  261568 kubeadm.go:403] duration metric: took 109.746452ms to StartCluster
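
restartPrimaryControlPlane completed in about 25ms because the diff two lines up came back clean: the freshly rendered kubeadm.yaml.new matched the kubeadm.yaml already on disk, so no reconfiguration was required. That decision, reduced to its core and assuming both files exist:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfig reports whether the proposed config differs from the one
    // in place; any difference forces a control-plane restart.
    func needsReconfig(current, proposed string) (bool, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return true, err // no current config: must (re)configure
    	}
    	b, err := os.ReadFile(proposed)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(changed, err)
    }
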
	I1228 06:56:59.129787  261568 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.129865  261568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:56:59.131990  261568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:59.132237  261568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:56:59.132306  261568 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:56:59.132422  261568 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132442  261568 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132440  261568 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132458  261568 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-500581"
	I1228 06:56:59.132466  261568 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-500581"
	I1228 06:56:59.132472  261568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-500581"
	I1228 06:56:59.132501  261568 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	W1228 06:56:59.132476  261568 addons.go:248] addon dashboard should already be in state true
	I1228 06:56:59.132606  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	W1228 06:56:59.132451  261568 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:56:59.132643  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.132804  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133076  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.133196  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.134412  261568 out.go:179] * Verifying Kubernetes components...
	I1228 06:56:59.135423  261568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:59.160990  261568 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-500581"
	W1228 06:56:59.161019  261568 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:56:59.161062  261568 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:56:59.161632  261568 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:56:59.164387  261568 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:56:59.164457  261568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:56:59.165776  261568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.165796  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:56:59.165854  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.166051  261568 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:56:57.530689  260915 kubeadm.go:884] updating cluster {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:56:57.530879  260915 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:56:57.530955  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.573400  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.573424  260915 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:56:57.573472  260915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:56:57.605727  260915 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:56:57.605749  260915 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:56:57.605756  260915 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1228 06:56:57.605895  260915 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-479871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 06:56:57.605982  260915 ssh_runner.go:195] Run: crio config
	I1228 06:56:57.674056  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:56:57.674080  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:56:57.674097  260915 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 06:56:57.674130  260915 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-479871 NodeName:newest-cni-479871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:56:57.674294  260915 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-479871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:56:57.674363  260915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:56:57.683718  260915 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:56:57.683774  260915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:56:57.697208  260915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:56:57.714193  260915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:56:57.736019  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1228 06:56:57.752347  260915 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:56:57.757444  260915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:56:57.770946  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:56:57.879994  260915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:57.907780  260915 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871 for IP: 192.168.85.2
	I1228 06:56:57.907815  260915 certs.go:195] generating shared ca certs ...
	I1228 06:56:57.907835  260915 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.907990  260915 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:56:57.908075  260915 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:56:57.908095  260915 certs.go:257] generating profile certs ...
	I1228 06:56:57.908171  260915 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key
	I1228 06:56:57.908190  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt with IP's: []
	I1228 06:56:57.970315  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt ...
	I1228 06:56:57.970351  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.crt: {Name:mk342ba4e76ceae6509b3a9b3e06bce76a0143fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970558  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key ...
	I1228 06:56:57.970573  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key: {Name:mk6097687692feb30b71900aa35b4aee9faa2acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:57.970713  260915 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581
	I1228 06:56:57.970751  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 06:56:58.015745  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 ...
	I1228 06:56:58.015774  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581: {Name:mk60335156a565fa5df02e2632a77039efa4fc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.015954  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 ...
	I1228 06:56:58.015970  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581: {Name:mk63edb29b1d00cff7e6d926b73407d8754bf39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.016080  260915 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt
	I1228 06:56:58.016188  260915 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key
	I1228 06:56:58.016281  260915 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key
	I1228 06:56:58.016305  260915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt with IP's: []
	I1228 06:56:58.169217  260915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt ...
	I1228 06:56:58.169306  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt: {Name:mk5ba8b17c1f71db6636f0d33f2f72040423ed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:56:58.169505  260915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key ...
	I1228 06:56:58.169521  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key: {Name:mk4b0b0f3f2c0acfd0e4e41f4c53c10301c4aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
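
This block is the first-start path: unlike the default-k8s-diff-port profile above, newest-cni-479871 has no cached profile certs, so minikube generates a client cert, an apiserver cert covering the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an aggregator proxy-client cert, each signed by the shared minikubeCA and written under a file lock. The signing step corresponds roughly to this crypto/x509 sketch; key size, serials, and expiry are illustrative, and a stand-in CA is generated where minikube would load the existing minikubeCA key pair:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Stand-in CA; the real flow reuses the minikubeCA key pair on disk.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert carrying the apiserver IP SANs seen in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
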
	I1228 06:56:58.169760  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:56:58.169804  260915 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:56:58.169816  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:56:58.169857  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:56:58.169919  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:56:58.169960  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:56:58.170023  260915 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:56:58.170853  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:56:58.189272  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:56:58.211984  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:56:58.244360  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:56:58.268746  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:56:58.287410  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:56:58.315271  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:56:58.346205  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:56:58.384409  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:56:58.419149  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:56:58.454023  260915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:56:58.476345  260915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:56:58.493349  260915 ssh_runner.go:195] Run: openssl version
	I1228 06:56:58.500769  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.510854  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:56:58.521404  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526814  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.526893  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:56:58.579536  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.591726  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:56:58.603715  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.613518  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:56:58.622954  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627431  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.627487  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:56:58.687477  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:56:58.699073  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:56:58.710948  260915 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.722754  260915 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:56:58.735944  260915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741915  260915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.741988  260915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:56:58.800642  260915 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:56:58.811409  260915 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
	I1228 06:56:58.823986  260915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:56:58.829294  260915 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:56:58.829413  260915 kubeadm.go:401] StartCluster: {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:56:58.829571  260915 ssh_runner.go:195] Run: sudo crio config
	I1228 06:56:58.913584  260915 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:56:58.932081  260915 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:56:58Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:56:58.932154  260915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:56:58.942180  260915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:56:58.953694  260915 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:56:58.953794  260915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:56:58.962855  260915 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:56:58.962880  260915 kubeadm.go:158] found existing configuration files:
	
	I1228 06:56:58.962926  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:56:58.972496  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:56:58.972534  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:56:58.980676  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:56:58.991072  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:56:58.991204  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:56:58.999651  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.008281  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:56:59.008349  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:56:59.016399  260915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:56:59.024902  260915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:56:59.024962  260915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
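
The four grep/rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is deleted so kubeadm init regenerates it. On this first start none of the files exist, so every grep exits with status 2 and every rm is a no-op. Condensed into a sketch (local file access standing in for the SSH/sudo round trips):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm recreates it.
    			os.Remove(path)
    			fmt.Println("cleared", path)
    		}
    	}
    }
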
	I1228 06:56:59.032507  260915 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:56:59.203193  260915 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:56:59.293476  260915 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 06:56:59.167161  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:56:59.167190  261568 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:56:59.167250  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.195102  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.209215  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.213140  261568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.213164  261568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:56:59.213251  261568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:56:59.240235  261568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:56:59.296939  261568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:56:59.314285  261568 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:56:59.324792  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:56:59.342466  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:56:59.342613  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:56:59.359906  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:56:59.364010  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:56:59.364045  261568 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:56:59.391472  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:56:59.391508  261568 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:56:59.444439  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:56:59.444465  261568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:56:59.472399  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:56:59.472451  261568 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:56:59.491671  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:56:59.491775  261568 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:56:59.516085  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:56:59.516120  261568 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:56:59.540413  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:56:59.540444  261568 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:56:59.563645  261568 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:56:59.563672  261568 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:56:59.581659  261568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
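
The dashboard step above hands every manifest to a single kubectl invocation with repeated -f flags, run under the cluster's kubeconfig. A local-equivalent sketch (file list abbreviated, paths assumed):

	// Sketch: one `kubectl apply` across several -f flags, as the dashboard step does.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // remaining manifests elided
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
		}
	}

Applying all files in one invocation means kubectl resolves them in order and the log gets a single completion duration, which is what the `Completed:` line above reports.
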
	I1228 06:57:00.542003  261568 node_ready.go:49] node "default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:00.542057  261568 node_ready.go:38] duration metric: took 1.227733507s for node "default-k8s-diff-port-500581" to be "Ready" ...
	I1228 06:57:00.542077  261568 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:00.542135  261568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:01.105548  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.780701527s)
	I1228 06:57:01.105609  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.745665986s)
	I1228 06:57:01.105694  261568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.523998963s)
	I1228 06:57:01.105746  261568 api_server.go:72] duration metric: took 1.973482037s to wait for apiserver process to appear ...
	I1228 06:57:01.105763  261568 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:01.105885  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.107453  261568 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-500581 addons enable metrics-server
	
	I1228 06:57:01.110897  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.110919  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
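
The 500 responses above are the normal transient state while the rbac and priority-class post-start hooks finish; api_server.go simply re-polls /healthz until it returns 200. A standalone sketch of that poll loop (the URL is the one from the log; skipping TLS verification is an illustration-only shortcut, the real check trusts the cluster CA):

	// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.103.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthy: %s\n", body)
					return
				}
				fmt.Printf("not ready yet (%d)\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}
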
	I1228 06:57:01.112410  261568 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:57:01.113682  261568 addons.go:530] duration metric: took 1.981384906s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.097263  260283 addons.go:530] duration metric: took 2.160064919s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:56:59.581199  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:56:59.589461  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:56:59.589517  260283 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:00.081170  260283 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1228 06:57:00.085345  260283 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1228 06:57:00.086368  260283 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:00.086398  260283 api_server.go:131] duration metric: took 1.006128416s to wait for apiserver health ...
	I1228 06:57:00.086409  260283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:00.090076  260283 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:00.090113  260283 system_pods.go:61] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.090124  260283 system_pods.go:61] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.090138  260283 system_pods.go:61] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.090151  260283 system_pods.go:61] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.090162  260283 system_pods.go:61] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.090186  260283 system_pods.go:61] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.090199  260283 system_pods.go:61] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.090212  260283 system_pods.go:61] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.090223  260283 system_pods.go:74] duration metric: took 3.804246ms to wait for pod list to return data ...
	I1228 06:57:00.090236  260283 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:00.092690  260283 default_sa.go:45] found service account: "default"
	I1228 06:57:00.092707  260283 default_sa.go:55] duration metric: took 2.461167ms for default service account to be created ...
	I1228 06:57:00.092720  260283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:00.095179  260283 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:00.095212  260283 system_pods.go:89] "coredns-7d764666f9-dmhdv" [73a84260-cf19-47c9-a23e-616f99cb5f38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:00.095224  260283 system_pods.go:89] "etcd-embed-certs-422591" [fa26dd24-514f-4ada-b710-7e65e52ccd9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:00.095245  260283 system_pods.go:89] "kindnet-9zxtp" [e5bd678d-7d09-455f-993e-2fb6a4f02111] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:00.095258  260283 system_pods.go:89] "kube-apiserver-embed-certs-422591" [935ffe6b-7ba7-4580-b22b-b76cd5469ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:00.095267  260283 system_pods.go:89] "kube-controller-manager-embed-certs-422591" [39f6d823-b0f9-44c0-be26-47e85a60ca63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:00.095278  260283 system_pods.go:89] "kube-proxy-j2dkd" [f64a0ce0-4a8f-4d7d-aac7-cce99ec2bdd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:00.095286  260283 system_pods.go:89] "kube-scheduler-embed-certs-422591" [ba6d5c13-ea17-4f1d-bcc9-c57e4e7ce995] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:00.095297  260283 system_pods.go:89] "storage-provisioner" [ac0163fe-8dd0-4650-a401-22a9a9310b5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:00.095307  260283 system_pods.go:126] duration metric: took 2.57702ms to wait for k8s-apps to be running ...
	I1228 06:57:00.095319  260283 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:00.095369  260283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:00.112536  260283 system_svc.go:56] duration metric: took 17.190631ms WaitForService to wait for kubelet
	I1228 06:57:00.112574  260283 kubeadm.go:587] duration metric: took 3.175583293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:00.112597  260283 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:00.117248  260283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:00.117423  260283 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:00.117486  260283 node_conditions.go:105] duration metric: took 4.86014ms to run NodePressure ...
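
The NodePressure step reads the node's reported capacity (8 CPUs and 304681132Ki of ephemeral storage in the lines above). A client-go sketch that fetches the same fields; the kubeconfig path is an assumption:

	// Sketch (client-go): read the capacity fields the NodePressure check logs.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
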
	I1228 06:57:00.117528  260283 start.go:242] waiting for startup goroutines ...
	I1228 06:57:00.117683  260283 start.go:247] waiting for cluster config update ...
	I1228 06:57:00.117705  260283 start.go:256] writing updated cluster config ...
	I1228 06:57:00.118280  260283 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:00.124948  260283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:00.129371  260283 pod_ready.go:83] waiting for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:02.139240  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
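
The pod_ready wait above keeps re-checking the coredns pod until its PodReady condition turns True (or the pod disappears). The core of that test reduces to a condition scan like this sketch (the sample pod literal is synthetic; a real caller would fetch the pod via client-go as in the node sketch above):

	// Sketch: the Ready test the pod_ready loop performs is a scan of status conditions.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println("ready:", isPodReady(pod)) // prints: ready: false
	}
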
	I1228 06:57:01.606775  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:01.611458  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:01.611490  261568 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:02.106018  261568 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1228 06:57:02.112713  261568 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1228 06:57:02.114062  261568 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:02.114087  261568 api_server.go:131] duration metric: took 1.008258851s to wait for apiserver health ...
	I1228 06:57:02.114096  261568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:02.118560  261568 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:02.118604  261568 system_pods.go:61] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.118620  261568 system_pods.go:61] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.118631  261568 system_pods.go:61] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.118640  261568 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.118651  261568 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.118660  261568 system_pods.go:61] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.118668  261568 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.118676  261568 system_pods.go:61] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.118685  261568 system_pods.go:74] duration metric: took 4.581477ms to wait for pod list to return data ...
	I1228 06:57:02.118694  261568 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:02.122002  261568 default_sa.go:45] found service account: "default"
	I1228 06:57:02.122020  261568 default_sa.go:55] duration metric: took 3.320928ms for default service account to be created ...
	I1228 06:57:02.122039  261568 system_pods.go:116] waiting for k8s-apps to be running ...
	I1228 06:57:02.125517  261568 system_pods.go:86] 8 kube-system pods found
	I1228 06:57:02.125558  261568 system_pods.go:89] "coredns-7d764666f9-9glh9" [9c46cd6f-643c-4dcd-9ffd-88becb063b24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1228 06:57:02.125571  261568 system_pods.go:89] "etcd-default-k8s-diff-port-500581" [03cb3e5a-cdc5-4c45-983f-05b0f5326abe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:02.125594  261568 system_pods.go:89] "kindnet-lsrww" [b5bd3c1b-325d-46fe-9378-779822f0ba5b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:02.125607  261568 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-500581" [8a2ed107-3bb0-4c8c-bdff-d7011ae71058] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:02.125619  261568 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-500581" [933ddecb-a0d9-44c2-abb5-d7629a8107eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:02.125628  261568 system_pods.go:89] "kube-proxy-95gmh" [f25e4b21-a201-4838-b7c9-a5fde3304662] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:02.125643  261568 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-500581" [35caf688-1337-46ff-b025-5d59373ae8e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:02.125650  261568 system_pods.go:89] "storage-provisioner" [12b3784f-bffe-49c1-8915-2011c07bee4e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1228 06:57:02.125663  261568 system_pods.go:126] duration metric: took 3.61618ms to wait for k8s-apps to be running ...
	I1228 06:57:02.125675  261568 system_svc.go:44] waiting for kubelet service to be running ....
	I1228 06:57:02.125723  261568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:02.146516  261568 system_svc.go:56] duration metric: took 20.829772ms WaitForService to wait for kubelet
	I1228 06:57:02.146548  261568 kubeadm.go:587] duration metric: took 3.014284503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:02.146571  261568 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:02.151142  261568 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:02.151173  261568 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:02.151191  261568 node_conditions.go:105] duration metric: took 4.614814ms to run NodePressure ...
	I1228 06:57:02.151206  261568 start.go:242] waiting for startup goroutines ...
	I1228 06:57:02.151215  261568 start.go:247] waiting for cluster config update ...
	I1228 06:57:02.151228  261568 start.go:256] writing updated cluster config ...
	I1228 06:57:02.151492  261568 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:02.158502  261568 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:02.163107  261568 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:04.168739  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:06.170937  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:07.273248  260915 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:57:07.273330  260915 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:57:07.273447  260915 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:57:07.273543  260915 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:57:07.273595  260915 kubeadm.go:319] OS: Linux
	I1228 06:57:07.273651  260915 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:57:07.273709  260915 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:57:07.273771  260915 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:57:07.273835  260915 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:57:07.273916  260915 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:57:07.273992  260915 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:57:07.274078  260915 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:57:07.274138  260915 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:57:07.274235  260915 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:57:07.274357  260915 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:57:07.274477  260915 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:57:07.274563  260915 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 06:57:07.276237  260915 out.go:252]   - Generating certificates and keys ...
	I1228 06:57:07.276338  260915 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:57:07.276435  260915 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:57:07.276531  260915 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:57:07.276613  260915 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:57:07.276715  260915 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:57:07.276790  260915 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:57:07.276871  260915 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:57:07.277062  260915 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277160  260915 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:57:07.277338  260915 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-479871] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 06:57:07.277431  260915 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:57:07.277519  260915 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:57:07.277582  260915 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:57:07.277660  260915 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:57:07.277726  260915 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:57:07.277802  260915 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:57:07.277871  260915 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:57:07.277975  260915 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:57:07.278078  260915 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:57:07.278183  260915 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:57:07.278271  260915 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:57:07.279768  260915 out.go:252]   - Booting up control plane ...
	I1228 06:57:07.279971  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:57:07.280118  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:57:07.280203  260915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:57:07.280341  260915 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:57:07.280459  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:57:07.280594  260915 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:57:07.280705  260915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:57:07.280752  260915 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:57:07.280918  260915 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:57:07.281066  260915 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:57:07.281146  260915 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.62379ms
	I1228 06:57:07.281264  260915 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:57:07.281347  260915 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1228 06:57:07.281414  260915 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:57:07.281473  260915 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:57:07.281553  260915 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006106358s
	I1228 06:57:07.281644  260915 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.100872978s
	I1228 06:57:07.281739  260915 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001834302s
	I1228 06:57:07.281997  260915 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:57:07.282187  260915 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:57:07.282270  260915 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:57:07.282522  260915 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-479871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:57:07.282694  260915 kubeadm.go:319] [bootstrap-token] Using token: 1h1kon.f0uwfkf8goxau87f
	I1228 06:57:07.285641  260915 out.go:252]   - Configuring RBAC rules ...
	I1228 06:57:07.285801  260915 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:57:07.285940  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:57:07.286155  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:57:07.286341  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:57:07.286509  260915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:57:07.286626  260915 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:57:07.286789  260915 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:57:07.286944  260915 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:57:07.287022  260915 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:57:07.287050  260915 kubeadm.go:319] 
	I1228 06:57:07.287134  260915 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:57:07.287148  260915 kubeadm.go:319] 
	I1228 06:57:07.287240  260915 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:57:07.287251  260915 kubeadm.go:319] 
	I1228 06:57:07.287284  260915 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:57:07.287366  260915 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:57:07.287440  260915 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:57:07.287451  260915 kubeadm.go:319] 
	I1228 06:57:07.287527  260915 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:57:07.287537  260915 kubeadm.go:319] 
	I1228 06:57:07.287606  260915 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:57:07.287615  260915 kubeadm.go:319] 
	I1228 06:57:07.287692  260915 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:57:07.287797  260915 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:57:07.287900  260915 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:57:07.287911  260915 kubeadm.go:319] 
	I1228 06:57:07.288018  260915 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:57:07.288149  260915 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:57:07.288163  260915 kubeadm.go:319] 
	I1228 06:57:07.288271  260915 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288398  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:57:07.288433  260915 kubeadm.go:319] 	--control-plane 
	I1228 06:57:07.288450  260915 kubeadm.go:319] 
	I1228 06:57:07.288562  260915 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:57:07.288578  260915 kubeadm.go:319] 
	I1228 06:57:07.288682  260915 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1h1kon.f0uwfkf8goxau87f \
	I1228 06:57:07.288837  260915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
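
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the certificateDir shown earlier (the ca.crt filename follows the standard kubeadm layout, assumed here):

	// Sketch: recompute the --discovery-token-ca-cert-hash value from the cluster CA.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // certificateDir from the log
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
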
	I1228 06:57:07.288863  260915 cni.go:84] Creating CNI manager for ""
	I1228 06:57:07.288884  260915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:07.290538  260915 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1228 06:57:04.636200  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:06.636940  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:07.291873  260915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:57:07.298126  260915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:57:07.298146  260915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:57:07.319436  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:57:07.645417  260915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:57:07.645491  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-479871 minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=newest-cni-479871 minikube.k8s.io/primary=true
	I1228 06:57:07.645603  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:07.785117  260915 ops.go:34] apiserver oom_adj: -16
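
The ops.go line confirms the apiserver's OOM score adjustment (-16) by reading /proc/$(pgrep kube-apiserver)/oom_adj, as the bash one-liner above shows. The same probe in Go, assuming pgrep is available on the node:

	// Sketch of the oom_adj probe: find the newest kube-apiserver PID with pgrep,
	// then read the legacy /proc/<pid>/oom_adj file the log reports as -16.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
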
	I1228 06:57:07.785122  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.285590  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:08.785995  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.285435  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:09.785188  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1228 06:57:08.671402  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:10.673458  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:10.285783  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:10.785938  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.285397  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.785451  260915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:11.861999  260915 kubeadm.go:1114] duration metric: took 4.216629312s to wait for elevateKubeSystemPrivileges
	I1228 06:57:11.862088  260915 kubeadm.go:403] duration metric: took 13.032677581s to StartCluster
	I1228 06:57:11.862111  260915 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:11.862308  260915 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:11.864955  260915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:11.865249  260915 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:11.865367  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:57:11.865643  260915 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:57:11.865724  260915 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:11.865736  260915 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-479871"
	I1228 06:57:11.865753  260915 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-479871"
	I1228 06:57:11.865782  260915 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:11.865784  260915 addons.go:70] Setting default-storageclass=true in profile "newest-cni-479871"
	I1228 06:57:11.865799  260915 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-479871"
	I1228 06:57:11.866154  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.866403  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.867390  260915 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:11.868587  260915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:11.893738  260915 addons.go:239] Setting addon default-storageclass=true in "newest-cni-479871"
	I1228 06:57:11.893776  260915 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:11.894248  260915 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:11.894552  260915 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:57:11.896093  260915 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:11.896115  260915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:57:11.896177  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:11.927222  260915 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:11.927248  260915 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:57:11.927322  260915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:11.928405  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:11.965132  260915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:11.990719  260915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:57:12.039886  260915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:12.049743  260915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:12.086577  260915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:12.189094  260915 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
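
The "host record injected" line is the outcome of the sed pipeline a few runs up: minikube rewrites the Corefile in the kube-system/coredns ConfigMap to add a hosts block for host.minikube.internal. A client-go sketch of that edit, simplified to a plain string substitution rather than a real Corefile parser (kubeconfig path assumed):

	// Sketch (client-go): insert a hosts{} block ahead of the forward plugin in the
	// coredns ConfigMap, approximating the sed pipeline shown in the log.
	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		cmAPI := cs.CoreV1().ConfigMaps("kube-system")
		cm, err := cmAPI.Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hosts := "    hosts {\n       192.168.85.1 host.minikube.internal\n       fallthrough\n    }\n    forward ."
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts, 1)
		if _, err := cmAPI.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
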
	I1228 06:57:12.190197  260915 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:12.190260  260915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:12.404986  260915 api_server.go:72] duration metric: took 539.699676ms to wait for apiserver process to appear ...
	I1228 06:57:12.405015  260915 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:12.405067  260915 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:12.410709  260915 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1228 06:57:12.411987  260915 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:12.412017  260915 api_server.go:131] duration metric: took 6.99389ms to wait for apiserver health ...
	I1228 06:57:12.412084  260915 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:12.412431  260915 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1228 06:57:12.413844  260915 addons.go:530] duration metric: took 548.200751ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1228 06:57:12.415402  260915 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:12.415430  260915 system_pods.go:61] "coredns-7d764666f9-cqtm4" [80bee88e-62a5-413c-9e2b-0cc274cf605d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:12.415437  260915 system_pods.go:61] "etcd-newest-cni-479871" [8bb011cd-dd9f-4176-b43a-5629132fbf66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:12.415446  260915 system_pods.go:61] "kindnet-74fnf" [f610ca19-f52f-41ef-90d7-6ae6b47445da] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:12.415462  260915 system_pods.go:61] "kube-apiserver-newest-cni-479871" [a83949b2-d4ff-40cb-b0de-d4ba8547a489] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:12.415469  260915 system_pods.go:61] "kube-controller-manager-newest-cni-479871" [018c9a7d-7992-49db-afd0-8acc014b1976] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:12.415477  260915 system_pods.go:61] "kube-proxy-kzkbr" [a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:12.415484  260915 system_pods.go:61] "kube-scheduler-newest-cni-479871" [85dcc815-30f1-4c70-a83a-08ca392957f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:12.415490  260915 system_pods.go:61] "storage-provisioner" [267e9641-510e-4fac-a7f3-97501d5ada65] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:12.415498  260915 system_pods.go:74] duration metric: took 3.401244ms to wait for pod list to return data ...
	I1228 06:57:12.415506  260915 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:12.417774  260915 default_sa.go:45] found service account: "default"
	I1228 06:57:12.417795  260915 default_sa.go:55] duration metric: took 2.281764ms for default service account to be created ...
	I1228 06:57:12.417808  260915 kubeadm.go:587] duration metric: took 552.527471ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:12.417828  260915 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:12.420434  260915 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:12.420458  260915 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:12.420471  260915 node_conditions.go:105] duration metric: took 2.637801ms to run NodePressure ...
	I1228 06:57:12.420484  260915 start.go:242] waiting for startup goroutines ...
	I1228 06:57:12.694659  260915 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-479871" context rescaled to 1 replicas
	I1228 06:57:12.694709  260915 start.go:247] waiting for cluster config update ...
	I1228 06:57:12.694726  260915 start.go:256] writing updated cluster config ...
	I1228 06:57:12.695085  260915 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:12.764992  260915 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:12.767272  260915 out.go:179] * Done! kubectl is now configured to use "newest-cni-479871" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.229058929Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=39a0d6d8-2b60-47c9-883c-6fc7303fcfe4 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.229673614Z" level=info msg="Ran pod sandbox 07a2fc56fcd1d82016cf7ad1d47605694c69663903e6ca815686e4c821802287 with infra container: kube-system/kindnet-74fnf/POD" id=64b4bef4-553c-443e-a4fd-e7355b5eabbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.23075418Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=242b7174-f01f-4974-a75f-fa7bebe64fd4 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.232181986Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=0a930d20-b1e7-4f40-9252-f8b20107dcfc name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.232408506Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=0a930d20-b1e7-4f40-9252-f8b20107dcfc name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.232607756Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=0a930d20-b1e7-4f40-9252-f8b20107dcfc name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.234552443Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=43ea6c8d-6d78-4cbe-9ef5-d0d196e42554 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.235235185Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.236673405Z" level=info msg="Creating container: kube-system/kube-proxy-kzkbr/kube-proxy" id=7583a0f8-6df7-4e2f-8666-0f39aacb6e73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.236815268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.243088904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.243773668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.302587288Z" level=info msg="Created container 80e2f69491beb9d8231b5cec628343df5cfb57b4cdc3664931c02988a1503398: kube-system/kube-proxy-kzkbr/kube-proxy" id=7583a0f8-6df7-4e2f-8666-0f39aacb6e73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.303386922Z" level=info msg="Starting container: 80e2f69491beb9d8231b5cec628343df5cfb57b4cdc3664931c02988a1503398" id=0eb359e2-fdff-4c35-9f37-4c6386c2168d name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:12 newest-cni-479871 crio[770]: time="2025-12-28T06:57:12.30630491Z" level=info msg="Started container" PID=1585 containerID=80e2f69491beb9d8231b5cec628343df5cfb57b4cdc3664931c02988a1503398 description=kube-system/kube-proxy-kzkbr/kube-proxy id=0eb359e2-fdff-4c35-9f37-4c6386c2168d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a31522fec43c29a2b15132852904d1517f392086ceeb3a7e05417623b89706cb
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.57335946Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=43ea6c8d-6d78-4cbe-9ef5-d0d196e42554 name=/runtime.v1.ImageService/PullImage
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.574270686Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=5ff125b8-9937-4043-b80b-c3d92f198441 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.576534409Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=3254b96e-8e8f-4c48-a312-e4449f4a6c22 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.580140499Z" level=info msg="Creating container: kube-system/kindnet-74fnf/kindnet-cni" id=f17b8775-b692-4a36-aba7-2fcb96fe6300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.58026884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.585205018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.585683388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.618854425Z" level=info msg="Created container ce66f3e9f799276a584fc5d317bdf924e0370f7187f922540b5c779a6c2b59f4: kube-system/kindnet-74fnf/kindnet-cni" id=f17b8775-b692-4a36-aba7-2fcb96fe6300 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.619508239Z" level=info msg="Starting container: ce66f3e9f799276a584fc5d317bdf924e0370f7187f922540b5c779a6c2b59f4" id=6c729fe5-7331-4325-9395-d1ebd8e1d582 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:13 newest-cni-479871 crio[770]: time="2025-12-28T06:57:13.62134114Z" level=info msg="Started container" PID=1833 containerID=ce66f3e9f799276a584fc5d317bdf924e0370f7187f922540b5c779a6c2b59f4 description=kube-system/kindnet-74fnf/kindnet-cni id=6c729fe5-7331-4325-9395-d1ebd8e1d582 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07a2fc56fcd1d82016cf7ad1d47605694c69663903e6ca815686e4c821802287
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ce66f3e9f7992       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   07a2fc56fcd1d       kindnet-74fnf                               kube-system
	80e2f69491beb       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     2 seconds ago            Running             kube-proxy                0                   a31522fec43c2       kube-proxy-kzkbr                            kube-system
	9b91a094dd59c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     12 seconds ago           Running             kube-scheduler            0                   be13d53d87486       kube-scheduler-newest-cni-479871            kube-system
	c9331ba517b81       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     12 seconds ago           Running             kube-controller-manager   0                   184fc3888aa06       kube-controller-manager-newest-cni-479871   kube-system
	7cfdaea284828       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     12 seconds ago           Running             etcd                      0                   e71bef94b13e5       etcd-newest-cni-479871                      kube-system
	5fb31edf9a403       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     12 seconds ago           Running             kube-apiserver            0                   3aa3eb330ec4f       kube-apiserver-newest-cni-479871            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-479871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-479871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-479871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:57:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-479871
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:06 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:06 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:06 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 06:57:06 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-479871
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                c74e85f6-b22b-4d3f-a221-99d5faff29cc
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-479871                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-74fnf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-479871             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-479871    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-kzkbr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-479871             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-479871 event: Registered Node newest-cni-479871 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:14 up 39 min,  0 user,  load average: 5.11, 3.16, 1.92
	Linux newest-cni-479871 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: E1228 06:57:07.613874    1309 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-479871\" already exists" pod="kube-system/etcd-newest-cni-479871"
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: E1228 06:57:07.613961    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-479871" containerName="etcd"
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: I1228 06:57:07.638005    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-479871" podStartSLOduration=1.6379862570000001 podStartE2EDuration="1.637986257s" podCreationTimestamp="2025-12-28 06:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:07.637847679 +0000 UTC m=+1.173600342" watchObservedRunningTime="2025-12-28 06:57:07.637986257 +0000 UTC m=+1.173738919"
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: I1228 06:57:07.685171    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-479871" podStartSLOduration=1.68514903 podStartE2EDuration="1.68514903s" podCreationTimestamp="2025-12-28 06:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:07.657283023 +0000 UTC m=+1.193035684" watchObservedRunningTime="2025-12-28 06:57:07.68514903 +0000 UTC m=+1.220901692"
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: I1228 06:57:07.729767    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-479871" podStartSLOduration=1.729745738 podStartE2EDuration="1.729745738s" podCreationTimestamp="2025-12-28 06:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:07.687664751 +0000 UTC m=+1.223417417" watchObservedRunningTime="2025-12-28 06:57:07.729745738 +0000 UTC m=+1.265498394"
	Dec 28 06:57:07 newest-cni-479871 kubelet[1309]: I1228 06:57:07.755308    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-479871" podStartSLOduration=1.7552625339999999 podStartE2EDuration="1.755262534s" podCreationTimestamp="2025-12-28 06:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:07.73392101 +0000 UTC m=+1.269673671" watchObservedRunningTime="2025-12-28 06:57:07.755262534 +0000 UTC m=+1.291015196"
	Dec 28 06:57:08 newest-cni-479871 kubelet[1309]: E1228 06:57:08.606579    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-479871" containerName="kube-apiserver"
	Dec 28 06:57:08 newest-cni-479871 kubelet[1309]: E1228 06:57:08.606665    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:08 newest-cni-479871 kubelet[1309]: E1228 06:57:08.606791    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-479871" containerName="etcd"
	Dec 28 06:57:08 newest-cni-479871 kubelet[1309]: E1228 06:57:08.891223    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-479871" containerName="kube-controller-manager"
	Dec 28 06:57:09 newest-cni-479871 kubelet[1309]: E1228 06:57:09.612893    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:09 newest-cni-479871 kubelet[1309]: E1228 06:57:09.613263    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-479871" containerName="kube-apiserver"
	Dec 28 06:57:10 newest-cni-479871 kubelet[1309]: E1228 06:57:10.614750    1309 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-479871" containerName="kube-apiserver"
	Dec 28 06:57:10 newest-cni-479871 kubelet[1309]: I1228 06:57:10.926374    1309 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 28 06:57:10 newest-cni-479871 kubelet[1309]: I1228 06:57:10.927169    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.004930    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-lib-modules\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.004984    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-lib-modules\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005017    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h799r\" (UniqueName: \"kubernetes.io/projected/f610ca19-f52f-41ef-90d7-6ae6b47445da-kube-api-access-h799r\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005209    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-kube-proxy\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005243    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-xtables-lock\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005268    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-cni-cfg\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005364    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8qh\" (UniqueName: \"kubernetes.io/projected/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-kube-api-access-cg8qh\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.005421    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-xtables-lock\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:12 newest-cni-479871 kubelet[1309]: I1228 06:57:12.635524    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-kzkbr" podStartSLOduration=1.635503916 podStartE2EDuration="1.635503916s" podCreationTimestamp="2025-12-28 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-28 06:57:12.635147073 +0000 UTC m=+6.170899735" watchObservedRunningTime="2025-12-28 06:57:12.635503916 +0000 UTC m=+6.171256579"
	Dec 28 06:57:14 newest-cni-479871 kubelet[1309]: I1228 06:57:14.642465    1309 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-74fnf" podStartSLOduration=2.300972578 podStartE2EDuration="3.642445885s" podCreationTimestamp="2025-12-28 06:57:11 +0000 UTC" firstStartedPulling="2025-12-28 06:57:12.234013891 +0000 UTC m=+5.769766550" lastFinishedPulling="2025-12-28 06:57:13.575487195 +0000 UTC m=+7.111239857" observedRunningTime="2025-12-28 06:57:14.64224509 +0000 UTC m=+8.177997752" watchObservedRunningTime="2025-12-28 06:57:14.642445885 +0000 UTC m=+8.178198570"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:57:13.964495  269206 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:13Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.028212  269206 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.108736  269206 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.188059  269206 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.254019  269206 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.320474  269206 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.387484  269206 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:14.456781  269206 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:14Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-479871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-cqtm4 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner: exit status 1 (73.359494ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-cqtm4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)
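Every failure in this group reduces to the same probe: before disabling an addon or pausing, minikube lists containers by shelling out to `sudo runc --root /run/runc list -f json`, and on this CRI-O node that directory does not exist, so the command exits 1. The following is a minimal Go sketch of that probe, under the assumption that plain runc semantics apply; it mirrors the command visible in the logs, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields runc prints for `list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunc runs the same command the tests run and decodes its output.
func listRunc(root string) ([]runcContainer, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		// With a nonexistent root (e.g. /run/runc on this node), runc exits 1
		// and prints "open /run/runc: no such file or directory" on stderr.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	return cs, nil
}

func main() {
	cs, err := listRunc("/run/runc")
	fmt.Println(cs, err)
}

Pointed at a root directory that exists, this returns the container list (runc prints "null" when there are none, which decodes to an empty slice); pointed at /run/runc on this node, it reproduces the exact stderr seen above.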

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-479871 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-479871 --alsologtostderr -v=1: exit status 80 (2.284253682s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-479871 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:57:38.820631  275592 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:38.820876  275592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:38.820889  275592 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:38.820895  275592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:38.821189  275592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:38.821418  275592 out.go:368] Setting JSON to false
	I1228 06:57:38.821435  275592 mustload.go:66] Loading cluster: newest-cni-479871
	I1228 06:57:38.821773  275592 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:38.822174  275592 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:38.840494  275592 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:38.840724  275592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:38.895701  275592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-28 06:57:38.885982373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:38.896314  275592 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-479871 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:57:38.898306  275592 out.go:179] * Pausing node newest-cni-479871 ... 
	I1228 06:57:38.900082  275592 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:38.900382  275592 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:38.900426  275592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:38.918053  275592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:39.009313  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:39.024011  275592 pause.go:52] kubelet running: true
	I1228 06:57:39.024121  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:39.183209  275592 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:39.244988  275592 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:39.257433  275592 retry.go:84] will retry after 200ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:39Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:39.457767  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:39.471318  275592 pause.go:52] kubelet running: false
	I1228 06:57:39.471371  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:39.600408  275592 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:39.660628  275592 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:39.882116  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:39.903514  275592 pause.go:52] kubelet running: false
	I1228 06:57:39.903675  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:40.033009  275592 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:40.085738  275592 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:40.845478  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:40.858999  275592 pause.go:52] kubelet running: false
	I1228 06:57:40.859082  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:40.969656  275592 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:41.022752  275592 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:41.038238  275592 out.go:203] 
	W1228 06:57:41.039519  275592 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:57:41.039538  275592 out.go:285] * 
	* 
	W1228 06:57:41.041216  275592 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:57:41.042224  275592 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-479871 --alsologtostderr -v=1 failed: exit status 80
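The pause path gives the failing probe several chances before surfacing GUEST_PAUSE: the stderr above shows retry.go waiting 200ms and then re-running the kubelet check, `crio config`, and the runc listing three more times over roughly two seconds before exiting with status 80. Below is a sketch of that shape of bounded retry with an illustrative doubling backoff; the real schedule and attempt count are minikube internals not shown in the log.

package main

import (
	"fmt"
	"time"
)

// retry runs op up to attempts times, doubling the wait between tries.
func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	// Stand-in for the failing probe; the real op is the runc listing above.
	err := retry(4, 200*time.Millisecond, func() error {
		return fmt.Errorf("open /run/runc: no such file or directory")
	})
	fmt.Println(err) // minikube surfaces this as GUEST_PAUSE, exit status 80
}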
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-479871
helpers_test.go:244: (dbg) docker inspect newest-cni-479871:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	        "Created": "2025-12-28T06:56:54.089539242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:57:27.849721622Z",
	            "FinishedAt": "2025-12-28T06:57:26.934247109Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hosts",
	        "LogPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8-json.log",
	        "Name": "/newest-cni-479871",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-479871:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-479871",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	                "LowerDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-479871",
	                "Source": "/var/lib/docker/volumes/newest-cni-479871/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-479871",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-479871",
	                "name.minikube.sigs.k8s.io": "newest-cni-479871",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe586bcaf21681d7e37e4e7c893ff8d1d4575d0df1323df30e945f5154ed01bf",
	            "SandboxKey": "/var/run/docker/netns/fe586bcaf216",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-479871": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e93d5a2f53a18661bc1ad8ca49ab2b022bc0bcfa1a555873e6d7e016530b0cb",
	                    "EndpointID": "97ef2ce8d5fe85f46dff4e5cdd93862933dd5ee622a0ac89e41ea30a61725303",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:7f:c9:8b:c5:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-479871",
	                        "c33fbf6c5387"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
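For reference, the SSH endpoint 127.0.0.1:33103 that the pause attempt dialed comes straight from the NetworkSettings.Ports map in this inspect dump; the harness extracts it with the Go template visible in the stderr above. The following is a small sketch that reads the same mapping via encoding/json instead (the profile name is taken from this report; only the docker CLI on PATH is assumed):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// portBinding matches the HostIp/HostPort objects under NetworkSettings.Ports.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry keeps only the slice of the inspect document we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker inspect emits a JSON array, one entry per container.
	out, err := exec.Command("docker", "container", "inspect", "newest-cni-479871").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	// For the dump above this prints, among others: 22/tcp -> 127.0.0.1:33103
	for proto, binds := range entries[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
		}
	}
}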
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871: exit status 2 (345.010471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25: (1.088924463s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:27.608626  272455 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:27.608891  272455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:27.608901  272455 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:27.608905  272455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:27.609095  272455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:27.609520  272455 out.go:368] Setting JSON to false
	I1228 06:57:27.610841  272455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2400,"bootTime":1766902648,"procs":512,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:27.610899  272455 start.go:143] virtualization: kvm guest
	I1228 06:57:27.613002  272455 out.go:179] * [newest-cni-479871] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:27.614314  272455 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:27.614309  272455 notify.go:221] Checking for updates...
	I1228 06:57:27.615685  272455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:27.617024  272455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:27.618436  272455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:27.619651  272455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:27.620801  272455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:27.622394  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:27.622871  272455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:27.649730  272455 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:27.649858  272455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:27.711778  272455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:27.701041109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:27.711909  272455 docker.go:319] overlay module found
	I1228 06:57:27.714695  272455 out.go:179] * Using the docker driver based on existing profile
	I1228 06:57:27.715814  272455 start.go:309] selected driver: docker
	I1228 06:57:27.715827  272455 start.go:928] validating driver "docker" against &{Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:27.715906  272455 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:27.716544  272455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:27.774275  272455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:27.762678993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:27.774595  272455 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:27.774629  272455 cni.go:84] Creating CNI manager for ""
	I1228 06:57:27.774690  272455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:27.774727  272455 start.go:353] cluster config:
	{Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:27.776533  272455 out.go:179] * Starting "newest-cni-479871" primary control-plane node in "newest-cni-479871" cluster
	I1228 06:57:27.777655  272455 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:27.778822  272455 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:27.779911  272455 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:27.779958  272455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:27.779956  272455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:27.780007  272455 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:27.780266  272455 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:27.780296  272455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:27.780460  272455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:57:27.801449  272455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:27.801472  272455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:27.801491  272455 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:27.801528  272455 start.go:360] acquireMachinesLock for newest-cni-479871: {Name:mk0ffa1f12460094192ce711dd360a9389869f0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:27.801591  272455 start.go:364] duration metric: took 40.857µs to acquireMachinesLock for "newest-cni-479871"
	I1228 06:57:27.801613  272455 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:57:27.801619  272455 fix.go:54] fixHost starting: 
	I1228 06:57:27.801874  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:27.820884  272455 fix.go:112] recreateIfNeeded on newest-cni-479871: state=Stopped err=<nil>
	W1228 06:57:27.820925  272455 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:57:24.037107  270987 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:57:24.066848  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:24.086253  270987 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:57:24.086272  270987 kic_runner.go:114] Args: [docker exec --privileged auto-610916 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 06:57:24.132042  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:24.155369  270987 machine.go:94] provisionDockerMachine start ...
	I1228 06:57:24.155446  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:24.175970  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:24.176342  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:24.176366  270987 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:57:24.177174  270987 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33426->127.0.0.1:33098: read: connection reset by peer
	I1228 06:57:27.305442  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-610916
	
	I1228 06:57:27.305473  270987 ubuntu.go:182] provisioning hostname "auto-610916"
	I1228 06:57:27.305525  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.325728  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.325955  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.325974  270987 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-610916 && echo "auto-610916" | sudo tee /etc/hostname
	I1228 06:57:27.465700  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-610916
	
	I1228 06:57:27.465786  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.485558  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.485776  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.485793  270987 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-610916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-610916/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-610916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:57:27.614741  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: 
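The guarded script above keeps the 127.0.1.1 hostname mapping idempotent, so repeated provisioning runs do not stack duplicate /etc/hosts entries. A quick way to confirm the mapping from inside the node (a verification step, not part of this log):

	getent hosts auto-610916
	# expected: 127.0.1.1       auto-610916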
	I1228 06:57:27.614772  270987 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:57:27.614794  270987 ubuntu.go:190] setting up certificates
	I1228 06:57:27.614809  270987 provision.go:84] configureAuth start
	I1228 06:57:27.614857  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:27.636073  270987 provision.go:143] copyHostCerts
	I1228 06:57:27.636153  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:57:27.636170  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:57:27.636258  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:57:27.636381  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:57:27.636394  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:57:27.636443  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:57:27.636533  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:57:27.636544  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:57:27.636587  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:57:27.636670  270987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.auto-610916 san=[127.0.0.1 192.168.94.2 auto-610916 localhost minikube]
	I1228 06:57:27.665458  270987 provision.go:177] copyRemoteCerts
	I1228 06:57:27.665531  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:57:27.665584  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.690153  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:27.787684  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:57:27.807992  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1228 06:57:27.826888  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:57:27.846707  270987 provision.go:87] duration metric: took 231.883622ms to configureAuth
	I1228 06:57:27.846738  270987 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:57:27.846951  270987 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:27.847088  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.867779  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.868099  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.868129  270987 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:57:28.172836  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:57:28.172866  270987 machine.go:97] duration metric: took 4.017476194s to provisionDockerMachine
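The CRIO_MINIKUBE_OPTIONS drop-in written above marks the whole 10.96.0.0/12 service CIDR as an insecure registry, which lets CRI-O pull from in-cluster registries (for example a registry addon's ClusterIP service) over plain HTTP. The file can be read back on the node; its contents match what the command echoed:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '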
	I1228 06:57:28.172880  270987 client.go:176] duration metric: took 8.942276035s to LocalClient.Create
	I1228 06:57:28.172906  270987 start.go:167] duration metric: took 8.942337637s to libmachine.API.Create "auto-610916"
	I1228 06:57:28.172920  270987 start.go:293] postStartSetup for "auto-610916" (driver="docker")
	I1228 06:57:28.172932  270987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:57:28.173050  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:57:28.173126  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.193062  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.298659  270987 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:57:28.302677  270987 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:57:28.302714  270987 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:57:28.302728  270987 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:57:28.302789  270987 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:57:28.302894  270987 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:57:28.303043  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:57:28.311353  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:28.335660  270987 start.go:296] duration metric: took 162.723362ms for postStartSetup
	I1228 06:57:28.336202  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:28.358542  270987 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/config.json ...
	I1228 06:57:28.358812  270987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:28.358865  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.378973  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.477523  270987 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:57:28.482381  270987 start.go:128] duration metric: took 9.253964147s to createHost
	I1228 06:57:28.482404  270987 start.go:83] releasing machines lock for "auto-610916", held for 9.254082575s
	I1228 06:57:28.482463  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:28.500103  270987 ssh_runner.go:195] Run: cat /version.json
	I1228 06:57:28.500154  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.500204  270987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:57:28.500271  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.518391  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.518847  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.660774  270987 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:28.668879  270987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:57:28.704105  270987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:57:28.708577  270987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:57:28.708639  270987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:57:28.735158  270987 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
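Because the runner flattens shell quoting, the find invocation above is hard to read; the same command with quoting restored (functionally equivalent, the exact quoting used by the runner is not preserved in the log):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

Renaming the bridge and podman CNI configs with a .mk_disabled suffix leaves kindnet (recommended above for the docker driver with the crio runtime) as the only CNI for CRI-O to load.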
	I1228 06:57:28.735181  270987 start.go:496] detecting cgroup driver to use...
	I1228 06:57:28.735212  270987 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:57:28.735254  270987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:57:28.750799  270987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:57:28.763144  270987 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:57:28.763190  270987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:57:28.780178  270987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:57:28.796822  270987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:57:28.878184  270987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:57:28.967161  270987 docker.go:234] disabling docker service ...
	I1228 06:57:28.967217  270987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:57:28.986533  270987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:57:28.999718  270987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1228 06:57:26.134734  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:28.135917  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:29.087789  270987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:57:29.174616  270987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:57:29.186986  270987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:57:29.200989  270987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:57:29.201053  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.210957  270987 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:57:29.211010  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.219703  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.227840  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.236513  270987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:57:29.244294  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.252649  270987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.266492  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.275948  270987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:57:29.283764  270987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:57:29.291555  270987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:29.369411  270987 ssh_runner.go:195] Run: sudo systemctl restart crio
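Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager detected on the host, keep conmon in the pod cgroup, and lower the unprivileged port floor to 0 inside pods. A sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf fragment; the TOML table headers are assumed, since the log shows only the key rewrites:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]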
	I1228 06:57:29.504432  270987 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:57:29.504495  270987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:57:29.508431  270987 start.go:574] Will wait 60s for crictl version
	I1228 06:57:29.508476  270987 ssh_runner.go:195] Run: which crictl
	I1228 06:57:29.511936  270987 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:57:29.535964  270987 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:57:29.536070  270987 ssh_runner.go:195] Run: crio --version
	I1228 06:57:29.564818  270987 ssh_runner.go:195] Run: crio --version
	I1228 06:57:29.593580  270987 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:57:29.594684  270987 cli_runner.go:164] Run: docker network inspect auto-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:29.611580  270987 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:57:29.615568  270987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
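The bash one-liner strips any stale host.minikube.internal entry and re-appends it pointing at the docker network gateway, keeping the edit safe to repeat across restarts. Per the grep on the previous line, the entry it maintains is:

	192.168.94.1	host.minikube.internal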
	I1228 06:57:29.626319  270987 kubeadm.go:884] updating cluster {Name:auto-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:57:29.626423  270987 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:29.626465  270987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:29.661197  270987 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:29.661223  270987 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:57:29.661276  270987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:29.687846  270987 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:29.687866  270987 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:57:29.687873  270987 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:57:29.687961  270987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-610916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
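In the kubelet drop-in above, the empty ExecStart= line resets the start command inherited from the base kubelet.service before the minikube-specific command line is set; for an ordinary service, systemd accepts a second ExecStart= only after such a reset. The merged unit can be inspected on the node (verification only, not part of the log):

	systemctl cat kubelet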
	I1228 06:57:29.688049  270987 ssh_runner.go:195] Run: crio config
	I1228 06:57:29.733607  270987 cni.go:84] Creating CNI manager for ""
	I1228 06:57:29.733628  270987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:29.733642  270987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:57:29.733663  270987 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-610916 NodeName:auto-610916 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:57:29.733768  270987 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-610916"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:57:29.733822  270987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:57:29.742276  270987 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:57:29.742348  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:57:29.750012  270987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1228 06:57:29.762831  270987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:57:29.778755  270987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
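The 2207-byte kubeadm config copied above stacks four kubeadm API documents: InitConfiguration (node-local endpoint and CRI socket), ClusterConfiguration (control-plane endpoint and component flags), KubeletConfiguration, and KubeProxyConfiguration; its 10.244.0.0/16 pod subnet matches the pod CIDR chosen for kindnet earlier. One way to sanity-check the rendered file before kubeadm consumes it (kubeadm config validate is available in recent kubeadm releases; this step is not in the log):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new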
	I1228 06:57:29.791627  270987 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:57:29.795195  270987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:57:29.804912  270987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:29.886268  270987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:29.916647  270987 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916 for IP: 192.168.94.2
	I1228 06:57:29.916668  270987 certs.go:195] generating shared ca certs ...
	I1228 06:57:29.916685  270987 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:29.916841  270987 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:57:29.916901  270987 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:57:29.916926  270987 certs.go:257] generating profile certs ...
	I1228 06:57:29.916992  270987 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key
	I1228 06:57:29.917012  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt with IP's: []
	I1228 06:57:30.044926  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt ...
	I1228 06:57:30.044958  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt: {Name:mk1312f0b2a2032c49b00ad683907b4a293451e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.045195  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key ...
	I1228 06:57:30.045216  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key: {Name:mk493068c73b3fffa7862dab2db2e5af2ad4cdf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.045363  270987 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28
	I1228 06:57:30.045389  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1228 06:57:30.158189  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 ...
	I1228 06:57:30.158218  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28: {Name:mk504cc05bb8ae655c586e0de3d0329e16c65b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.158421  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28 ...
	I1228 06:57:30.158440  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28: {Name:mk9ef5b5525f742323f13a2a32ffa9eba64f0142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.158555  270987 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt
	I1228 06:57:30.158659  270987 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key
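The apiserver certificate is issued for the SANs listed above: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service IP), 127.0.0.1, 10.0.0.1, and the node IP 192.168.94.2, so clients reaching the API server through any of these addresses can validate it. The SANs can be read back from the generated cert (inspection only, not part of the log):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt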
	I1228 06:57:30.158741  270987 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key
	I1228 06:57:30.158759  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt with IP's: []
	I1228 06:57:30.270524  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt ...
	I1228 06:57:30.270554  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt: {Name:mka63596e1271c706e6b7eac62cdc7cc3ca4865f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.270721  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key ...
	I1228 06:57:30.270744  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key: {Name:mkf21e3e4aab416c5d1d32fdb4276fe2f2f42020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.270957  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:57:30.271003  270987 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:57:30.271013  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:57:30.271055  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:57:30.271090  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:57:30.271114  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:57:30.271159  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:30.271792  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:57:30.294411  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:57:30.313499  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:57:30.331456  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:57:30.348879  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1228 06:57:30.366165  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:57:30.383872  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:57:30.400633  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:57:30.417868  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:57:30.436915  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:57:30.454672  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:57:30.471864  270987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:57:30.484411  270987 ssh_runner.go:195] Run: openssl version
	I1228 06:57:30.490518  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.498111  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:57:30.505463  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.509114  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.509169  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.544782  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:30.552986  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:30.560283  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.567532  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:57:30.575539  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.579221  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.579276  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.613791  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:57:30.621616  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:57:30.629144  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.636839  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:57:30.644526  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.648439  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.648495  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.687718  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:57:30.695733  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
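
Each openssl/ln pair above follows OpenSSL's subject-hash lookup convention: a CA becomes trusted system-wide once a symlink named <subject-hash>.0 under /etc/ssl/certs points at it. A minimal sketch of one round, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

	# Compute the subject hash and create the <hash>.0 symlink that
	# OpenSSL uses to locate the CA at verification time (path hypothetical).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
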
	I1228 06:57:30.703073  270987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:57:30.706590  270987 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:57:30.706638  270987 kubeadm.go:401] StartCluster: {Name:auto-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:30.706712  270987 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:30.753850  270987 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:57:30.765149  270987 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:30.765231  270987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:57:30.772721  270987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:57:30.780189  270987 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:57:30.780230  270987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:57:30.787573  270987 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:57:30.787587  270987 kubeadm.go:158] found existing configuration files:
	
	I1228 06:57:30.787634  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:57:30.795410  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:57:30.795458  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:57:30.802735  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:57:30.809751  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:57:30.809788  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:57:30.816643  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:57:30.823688  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:57:30.823727  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:57:30.830609  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:57:30.837629  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:57:30.837665  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
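
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm runs (on a first start, as here, the files simply do not exist yet). A condensed sketch of the same loop, using the endpoint shown in the log:

	# Remove kubeconfigs that do not point at the expected control plane.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
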
	I1228 06:57:30.844490  270987 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:57:30.881554  270987 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:57:30.881622  270987 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:57:30.944791  270987 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:57:30.944862  270987 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:57:30.944905  270987 kubeadm.go:319] OS: Linux
	I1228 06:57:30.944984  270987 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:57:30.945077  270987 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:57:30.945147  270987 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:57:30.945228  270987 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:57:30.945296  270987 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:57:30.945361  270987 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:57:30.945425  270987 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:57:30.945497  270987 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:57:31.003363  270987 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:57:31.003533  270987 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:57:31.003696  270987 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:57:31.010847  270987 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1228 06:57:27.671633  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:30.169012  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:31.013041  270987 out.go:252]   - Generating certificates and keys ...
	I1228 06:57:31.013115  270987 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:57:31.013217  270987 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:57:31.076574  270987 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:57:31.152713  270987 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:57:31.254971  270987 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:57:31.392250  270987 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:57:31.425796  270987 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:57:31.426013  270987 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-610916 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:57:31.498834  270987 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:57:31.499021  270987 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-610916 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:57:31.603128  270987 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:57:31.639904  270987 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:57:31.740867  270987 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:57:31.740970  270987 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:57:31.873393  270987 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:57:31.907883  270987 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:57:31.989991  270987 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:57:32.036264  270987 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:57:32.114999  270987 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:57:32.115482  270987 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:57:32.121217  270987 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:57:27.822983  272455 out.go:252] * Restarting existing docker container for "newest-cni-479871" ...
	I1228 06:57:27.823078  272455 cli_runner.go:164] Run: docker start newest-cni-479871
	I1228 06:57:28.077460  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:28.098743  272455 kic.go:430] container "newest-cni-479871" state is running.
	I1228 06:57:28.099228  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:28.120367  272455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:57:28.120589  272455 machine.go:94] provisionDockerMachine start ...
	I1228 06:57:28.120661  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:28.143624  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:28.143953  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:28.143971  272455 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:57:28.144632  272455 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50026->127.0.0.1:33103: read: connection reset by peer
	I1228 06:57:31.270069  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:57:31.270100  272455 ubuntu.go:182] provisioning hostname "newest-cni-479871"
	I1228 06:57:31.270163  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.289699  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.289926  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.289941  272455 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-479871 && echo "newest-cni-479871" | sudo tee /etc/hostname
	I1228 06:57:31.426879  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:57:31.426967  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.445496  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.445747  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.445765  272455 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-479871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-479871/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-479871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:57:31.567877  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:57:31.567905  272455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:57:31.567941  272455 ubuntu.go:190] setting up certificates
	I1228 06:57:31.567959  272455 provision.go:84] configureAuth start
	I1228 06:57:31.568049  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:31.588447  272455 provision.go:143] copyHostCerts
	I1228 06:57:31.588510  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:57:31.588533  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:57:31.588582  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:57:31.588665  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:57:31.588674  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:57:31.588700  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:57:31.588751  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:57:31.588758  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:57:31.588780  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:57:31.588830  272455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.newest-cni-479871 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-479871]
	I1228 06:57:31.739310  272455 provision.go:177] copyRemoteCerts
	I1228 06:57:31.739389  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:57:31.739435  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.758636  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:31.850505  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:57:31.868636  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:57:31.887288  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:57:31.904303  272455 provision.go:87] duration metric: took 336.319162ms to configureAuth
	I1228 06:57:31.904329  272455 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:57:31.904536  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:31.904641  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.922743  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.922960  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.922975  272455 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:57:32.255603  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:57:32.255627  272455 machine.go:97] duration metric: took 4.135020669s to provisionDockerMachine
	I1228 06:57:32.255641  272455 start.go:293] postStartSetup for "newest-cni-479871" (driver="docker")
	I1228 06:57:32.255654  272455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:57:32.255713  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:57:32.255760  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.276512  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.368440  272455 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:57:32.372197  272455 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:57:32.372226  272455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:57:32.372237  272455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:57:32.372290  272455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:57:32.372375  272455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:57:32.372463  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:57:32.381338  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:32.401589  272455 start.go:296] duration metric: took 145.932701ms for postStartSetup
	I1228 06:57:32.401683  272455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:32.401737  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.422931  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.512401  272455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:57:32.517023  272455 fix.go:56] duration metric: took 4.7153984s for fixHost
	I1228 06:57:32.517069  272455 start.go:83] releasing machines lock for "newest-cni-479871", held for 4.715464471s
	I1228 06:57:32.517153  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:32.535177  272455 ssh_runner.go:195] Run: cat /version.json
	I1228 06:57:32.535227  272455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:57:32.535244  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.535318  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.554418  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.554751  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.645792  272455 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:32.700940  272455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:57:32.734908  272455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:57:32.739648  272455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:57:32.739716  272455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:57:32.747404  272455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:57:32.747422  272455 start.go:496] detecting cgroup driver to use...
	I1228 06:57:32.747453  272455 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:57:32.747508  272455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:57:32.762371  272455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:57:32.776108  272455 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:57:32.776178  272455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:57:32.790257  272455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:57:32.801901  272455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:57:32.882973  272455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:57:32.963675  272455 docker.go:234] disabling docker service ...
	I1228 06:57:32.963729  272455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:57:32.978111  272455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:57:32.990320  272455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:57:33.068597  272455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:57:33.166992  272455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:57:33.185863  272455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:57:33.206306  272455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:57:33.206387  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.217732  272455 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:57:33.217804  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.228626  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.240270  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.251994  272455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:57:33.260712  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.269769  272455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.278664  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.287267  272455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:57:33.294325  272455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:57:33.301312  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:33.390081  272455 ssh_runner.go:195] Run: sudo systemctl restart crio
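
The sed/sysctl sequence above edits the CRI-O drop-in in place: pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, open unprivileged ports, and enable IP forwarding, then daemon-reload and restart crio. A condensed sketch of the core edits, assuming the stock drop-in path from the log:

	# Core CRI-O drop-in edits, mirroring the sed commands in the log.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
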
	I1228 06:57:33.553715  272455 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:57:33.553780  272455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:57:33.557765  272455 start.go:574] Will wait 60s for crictl version
	I1228 06:57:33.557821  272455 ssh_runner.go:195] Run: which crictl
	I1228 06:57:33.561315  272455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:57:33.586144  272455 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:57:33.586231  272455 ssh_runner.go:195] Run: crio --version
	I1228 06:57:33.615740  272455 ssh_runner.go:195] Run: crio --version
	I1228 06:57:33.657346  272455 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:57:33.658662  272455 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:33.680168  272455 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:57:33.684562  272455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
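
The hosts update above deliberately rewrites /etc/hosts through a temp file and cp rather than sed -i: inside a container /etc/hosts is a bind mount, so the file has to be updated without replacing its inode. The same idempotent pattern, with hypothetical name/IP values:

	# Drop any stale mapping for the name, append the current one, and
	# write back via cp to preserve the bind-mounted file's inode.
	IP=192.168.85.1; NAME=host.minikube.internal
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
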
	I1228 06:57:33.696730  272455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 06:57:32.122847  270987 out.go:252]   - Booting up control plane ...
	I1228 06:57:32.122976  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:57:32.123098  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:57:32.123841  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:57:32.150283  270987 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:57:32.150450  270987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:57:32.157224  270987 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:57:32.157429  270987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:57:32.157489  270987 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:57:32.266779  270987 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:57:32.266950  270987 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:57:32.768514  270987 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.843367ms
	I1228 06:57:32.773307  270987 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:57:32.773443  270987 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1228 06:57:32.773570  270987 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:57:32.773668  270987 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:57:33.777587  270987 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004224014s
	W1228 06:57:30.634896  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:32.635515  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:33.697864  272455 kubeadm.go:884] updating cluster {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:57:33.698023  272455 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:33.698207  272455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:33.738334  272455 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:33.738354  272455 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:57:33.738394  272455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:33.769247  272455 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:33.769274  272455 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:57:33.769284  272455 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1228 06:57:33.769403  272455 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-479871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
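
The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: an Exec directive must first be cleared before an override can replace, rather than append to, the inherited command. A sketch of writing such a drop-in, with a shortened hypothetical command line (the real one carries the full flag set from the log):

	# Reset-then-set pattern for overriding ExecStart in a drop-in.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --config=/var/lib/kubelet/config.yaml
		EOF
	sudo systemctl daemon-reload
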
	I1228 06:57:33.769491  272455 ssh_runner.go:195] Run: crio config
	I1228 06:57:33.819410  272455 cni.go:84] Creating CNI manager for ""
	I1228 06:57:33.819443  272455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:33.819463  272455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 06:57:33.819495  272455 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-479871 NodeName:newest-cni-479871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:57:33.819646  272455 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-479871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:57:33.819717  272455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:57:33.829016  272455 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:57:33.829101  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:57:33.837816  272455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:57:33.853961  272455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:57:33.868483  272455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1228 06:57:33.882729  272455 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:57:33.886671  272455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:57:33.897459  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:33.989500  272455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:34.014562  272455 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871 for IP: 192.168.85.2
	I1228 06:57:34.014584  272455 certs.go:195] generating shared ca certs ...
	I1228 06:57:34.014601  272455 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.014768  272455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:57:34.014823  272455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:57:34.014837  272455 certs.go:257] generating profile certs ...
	I1228 06:57:34.014938  272455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key
	I1228 06:57:34.015009  272455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581
	I1228 06:57:34.015080  272455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key
	I1228 06:57:34.015244  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:57:34.015289  272455 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:57:34.015304  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:57:34.015342  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:57:34.015381  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:57:34.015416  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:57:34.015484  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:34.016185  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:57:34.037890  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:57:34.058760  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:57:34.081701  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:57:34.108478  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:57:34.130660  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:57:34.152216  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:57:34.170684  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:57:34.189183  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:57:34.206606  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:57:34.229125  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:57:34.246672  272455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:57:34.259394  272455 ssh_runner.go:195] Run: openssl version
	I1228 06:57:34.265808  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.274413  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:57:34.282379  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.286174  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.286234  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.327279  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:57:34.335837  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.344221  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:57:34.352006  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.355742  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.355798  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.390265  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:57:34.398050  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.405829  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:57:34.414747  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.425769  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.425834  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.461046  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:34.469275  272455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:57:34.477451  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:57:34.528430  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:57:34.593309  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:57:34.652783  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:57:34.711443  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:57:34.777093  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
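
The six openssl runs above are the restart-time expiry check: -checkend 86400 makes openssl x509 exit non-zero when the certificate expires within the next 86400 seconds (24 hours), and that exit status is what triggers regeneration. A single-cert sketch, assuming read access to the certs directory:

	# Exit status drives the decision: 0 means still valid for >= 24h.
	if ! openssl x509 -noout -checkend 86400 \
			-in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
		echo "apiserver-kubelet-client.crt expires within 24h; regenerate" >&2
	fi
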
	I1228 06:57:34.821043  272455 kubeadm.go:401] StartCluster: {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:34.821189  272455 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:34.873329  272455 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:57:34.885432  272455 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:34Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:34.885501  272455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:57:34.893258  272455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:57:34.893278  272455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:57:34.893327  272455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:57:34.901362  272455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:57:34.902470  272455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-479871" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:34.902987  272455 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-479871" cluster setting kubeconfig missing "newest-cni-479871" context setting]
	I1228 06:57:34.903790  272455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.905714  272455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:57:34.913890  272455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 06:57:34.913925  272455 kubeadm.go:602] duration metric: took 20.639201ms to restartPrimaryControlPlane
	I1228 06:57:34.913951  272455 kubeadm.go:403] duration metric: took 92.928236ms to StartCluster
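
The restart path above hinges on the diff at 06:57:34.905: the freshly rendered kubeadm.yaml.new is compared against the live kubeadm.yaml, and when they match the control plane is left untouched ("does not require reconfiguration"). A sketch of that decision using the paths from the log; the else branch is a simplification of what minikube actually does on a config change:

	# Reconfigure only when the rendered config actually changed.
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
		echo "cluster config unchanged; skipping control-plane reconfiguration"
	else
		sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
		# ...re-run the kubeadm phases against the updated config here...
	fi
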
	I1228 06:57:34.913968  272455 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.914077  272455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:34.916170  272455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.916443  272455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:34.916550  272455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
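Note: the toEnable map above is minikube's internal view of addon state for the profile; the same information is available from the real CLI:

    $ minikube -p newest-cni-479871 addons list   # prints enabled/disabled status per addon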
	I1228 06:57:34.916640  272455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-479871"
	I1228 06:57:34.916658  272455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-479871"
	I1228 06:57:34.916665  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:34.916676  272455 addons.go:70] Setting dashboard=true in profile "newest-cni-479871"
	I1228 06:57:34.916688  272455 addons.go:239] Setting addon dashboard=true in "newest-cni-479871"
	W1228 06:57:34.916695  272455 addons.go:248] addon dashboard should already be in state true
	W1228 06:57:34.916672  272455 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:57:34.916723  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.916726  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.916739  272455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-479871"
	I1228 06:57:34.916767  272455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-479871"
	I1228 06:57:34.917088  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.917213  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.917259  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.920009  272455 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:34.921213  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:34.945670  272455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-479871"
	W1228 06:57:34.945695  272455 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:57:34.945722  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.945994  272455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:57:34.946234  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.948110  272455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:57:34.949115  272455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:57:32.668877  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:34.672814  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:34.631300  270987 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.857004545s
	I1228 06:57:36.280888  270987 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.503303433s
	I1228 06:57:36.297545  270987 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:57:36.307770  270987 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:57:36.317629  270987 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:57:36.317908  270987 kubeadm.go:319] [mark-control-plane] Marking the node auto-610916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:57:36.325908  270987 kubeadm.go:319] [bootstrap-token] Using token: x4upak.rw4kta5bbgc527cy
	I1228 06:57:34.949173  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:57:34.949184  272455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:57:34.949246  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.950237  272455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:34.950283  272455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:57:34.950337  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.977411  272455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:34.977508  272455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:57:34.977616  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.981507  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:34.987263  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:35.003464  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:35.061652  272455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:35.075263  272455 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:35.075340  272455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:35.087464  272455 api_server.go:72] duration metric: took 170.983918ms to wait for apiserver process to appear ...
	I1228 06:57:35.087486  272455 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:35.087500  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:35.088882  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:57:35.088902  272455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:57:35.093066  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:35.102903  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:57:35.102928  272455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:57:35.106367  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:35.116707  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:57:35.116730  272455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:57:35.136817  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:57:35.136850  272455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:57:35.151009  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:57:35.151157  272455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:57:35.165198  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:57:35.165226  272455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:57:35.178269  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:57:35.178288  272455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:57:35.190857  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:57:35.190880  272455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:57:35.203155  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:57:35.203179  272455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:57:35.215454  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
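Note: each dashboard manifest is copied to the node and applied in a single kubectl invocation above. Once the apply completes, the rollout can be checked from the host (a sketch; the namespace name is the upstream dashboard default, not shown in this log):

    $ kubectl --context newest-cni-479871 -n kubernetes-dashboard get deploy,svc
    $ minikube -p newest-cni-479871 dashboard --url   # waits for the pod and prints the proxy URL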
	I1228 06:57:36.608985  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 06:57:36.609016  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 06:57:36.609043  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:36.697101  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 06:57:36.697130  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 06:57:37.087956  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:37.092692  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:37.092723  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
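Note: the 403 -> 500 -> 200 progression here is the normal apiserver boot sequence: anonymous probes are rejected until the RBAC bootstrap roles (which grant unauthenticated access to /healthz) exist, then /healthz returns 500 while the remaining post-start hooks finish. The same probe can be run authenticated, and pending checks excluded, with standard healthz query parameters (a sketch):

    $ kubectl --context newest-cni-479871 get --raw '/healthz?verbose'
    $ kubectl --context newest-cni-479871 get --raw '/healthz?exclude=poststarthook/rbac/bootstrap-roles'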
	I1228 06:57:37.261271  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.168167788s)
	I1228 06:57:37.261333  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.154938902s)
	I1228 06:57:37.261418  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.045932944s)
	I1228 06:57:37.263061  272455 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-479871 addons enable metrics-server
	
	I1228 06:57:37.272444  272455 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:57:37.273681  272455 addons.go:530] duration metric: took 2.35713092s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:57:37.587612  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:37.591761  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:37.591792  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:36.327282  270987 out.go:252]   - Configuring RBAC rules ...
	I1228 06:57:36.327461  270987 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:57:36.330272  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:57:36.336284  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:57:36.338488  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:57:36.341092  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:57:36.343490  270987 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:57:36.692514  270987 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:57:37.105285  270987 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:57:37.683722  270987 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:57:37.685396  270987 kubeadm.go:319] 
	I1228 06:57:37.685528  270987 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:57:37.685536  270987 kubeadm.go:319] 
	I1228 06:57:37.685622  270987 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:57:37.685628  270987 kubeadm.go:319] 
	I1228 06:57:37.685658  270987 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:57:37.685724  270987 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:57:37.685783  270987 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:57:37.685795  270987 kubeadm.go:319] 
	I1228 06:57:37.685865  270987 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:57:37.685871  270987 kubeadm.go:319] 
	I1228 06:57:37.685926  270987 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:57:37.685932  270987 kubeadm.go:319] 
	I1228 06:57:37.685991  270987 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:57:37.686144  270987 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:57:37.686241  270987 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:57:37.686261  270987 kubeadm.go:319] 
	I1228 06:57:37.686360  270987 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:57:37.686451  270987 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:57:37.686457  270987 kubeadm.go:319] 
	I1228 06:57:37.686551  270987 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x4upak.rw4kta5bbgc527cy \
	I1228 06:57:37.686668  270987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:57:37.686696  270987 kubeadm.go:319] 	--control-plane 
	I1228 06:57:37.686702  270987 kubeadm.go:319] 
	I1228 06:57:37.686802  270987 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:57:37.686808  270987 kubeadm.go:319] 
	I1228 06:57:37.686907  270987 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x4upak.rw4kta5bbgc527cy \
	I1228 06:57:37.687052  270987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
	I1228 06:57:37.690764  270987 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:57:37.690909  270987 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
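Note: the bootstrap token printed in the join commands above is short-lived (kubeadm defaults to 24h). If it expires, a fresh join command can be regenerated on the control plane; this is stock kubeadm, not minikube-specific:

    $ sudo kubeadm token create --print-join-command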
	I1228 06:57:37.690947  270987 cni.go:84] Creating CNI manager for ""
	I1228 06:57:37.690960  270987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:37.695299  270987 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1228 06:57:38.088255  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:38.092765  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1228 06:57:38.093731  272455 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:38.093757  272455 api_server.go:131] duration metric: took 3.006264561s to wait for apiserver health ...
	I1228 06:57:38.093767  272455 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:38.097618  272455 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:38.097656  272455 system_pods.go:61] "coredns-7d764666f9-cqtm4" [80bee88e-62a5-413c-9e2b-0cc274cf605d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:38.097679  272455 system_pods.go:61] "etcd-newest-cni-479871" [8bb011cd-dd9f-4176-b43a-5629132fbf66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:38.097698  272455 system_pods.go:61] "kindnet-74fnf" [f610ca19-f52f-41ef-90d7-6ae6b47445da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:38.097713  272455 system_pods.go:61] "kube-apiserver-newest-cni-479871" [a83949b2-d4ff-40cb-b0de-d4ba8547a489] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:38.097734  272455 system_pods.go:61] "kube-controller-manager-newest-cni-479871" [018c9a7d-7992-49db-afd0-8acc014b1976] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:38.097765  272455 system_pods.go:61] "kube-proxy-kzkbr" [a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:38.097774  272455 system_pods.go:61] "kube-scheduler-newest-cni-479871" [85dcc815-30f1-4c70-a83a-08ca392957f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:38.097782  272455 system_pods.go:61] "storage-provisioner" [267e9641-510e-4fac-a7f3-97501d5ada65] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:38.097791  272455 system_pods.go:74] duration metric: took 4.01547ms to wait for pod list to return data ...
	I1228 06:57:38.097805  272455 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:38.100621  272455 default_sa.go:45] found service account: "default"
	I1228 06:57:38.100648  272455 default_sa.go:55] duration metric: took 2.834305ms for default service account to be created ...
	I1228 06:57:38.100670  272455 kubeadm.go:587] duration metric: took 3.184197442s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:38.100695  272455 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:38.103420  272455 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:38.103447  272455 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:38.103476  272455 node_conditions.go:105] duration metric: took 2.773785ms to run NodePressure ...
	I1228 06:57:38.103493  272455 start.go:242] waiting for startup goroutines ...
	I1228 06:57:38.103506  272455 start.go:247] waiting for cluster config update ...
	I1228 06:57:38.103520  272455 start.go:256] writing updated cluster config ...
	I1228 06:57:38.103880  272455 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:38.164632  272455 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:38.166186  272455 out.go:179] * Done! kubectl is now configured to use "newest-cni-479871" cluster and "default" namespace by default
	I1228 06:57:37.696615  270987 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:57:37.704918  270987 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:57:37.704940  270987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:57:37.721562  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:57:37.966215  270987 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:57:37.966290  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:37.966290  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-610916 minikube.k8s.io/updated_at=2025_12_28T06_57_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=auto-610916 minikube.k8s.io/primary=true
	I1228 06:57:37.975864  270987 ops.go:34] apiserver oom_adj: -16
	I1228 06:57:38.044350  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:38.545015  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
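Note: the repeated `kubectl get sa default` calls above are a poll: the default ServiceAccount only appears once kube-controller-manager's token controller has run. The same wait, written directly (a sketch):

    $ until kubectl -n default get sa default >/dev/null 2>&1; do sleep 1; done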
	W1228 06:57:34.637977  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:36.638948  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:38.134693  260283 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:57:38.134723  260283 pod_ready.go:86] duration metric: took 38.005325036s for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.137194  260283 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.141168  260283 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:57:38.141190  260283 pod_ready.go:86] duration metric: took 3.972263ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.143197  260283 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.147889  260283 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:57:38.147949  260283 pod_ready.go:86] duration metric: took 4.729431ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.150276  260283 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.333546  260283 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:57:38.333571  260283 pod_ready.go:86] duration metric: took 183.273399ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.533116  260283 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.932854  260283 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:57:38.932876  260283 pod_ready.go:86] duration metric: took 399.738895ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:37.169010  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:37.669399  261568 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:57:37.669431  261568 pod_ready.go:86] duration metric: took 35.50629669s for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.672232  261568 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.676380  261568 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.676406  261568 pod_ready.go:86] duration metric: took 4.14854ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.678455  261568 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.682902  261568 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.682927  261568 pod_ready.go:86] duration metric: took 4.444305ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.685195  261568 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.867556  261568 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.867590  261568 pod_ready.go:86] duration metric: took 182.371857ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.067941  261568 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.467758  261568 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:57:38.467785  261568 pod_ready.go:86] duration metric: took 399.816464ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.666849  261568 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.067582  261568 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:39.067613  261568 pod_ready.go:86] duration metric: took 400.734128ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.067630  261568 pod_ready.go:40] duration metric: took 36.909076713s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:39.121498  261568 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:39.122908  261568 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
	I1228 06:57:39.133792  260283 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.532725  260283 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:57:39.532756  260283 pod_ready.go:86] duration metric: took 398.942325ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.532771  260283 pod_ready.go:40] duration metric: took 39.407784586s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:39.588361  260283 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:39.590050  260283 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
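Note: the pod_ready loops above poll component pods by label until each reports Ready. kubectl can express the same wait declaratively (a sketch; labels copied from the log's selector list):

    $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    $ kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s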
	
	
	==> CRI-O <==
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.404470269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.407696544Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=89a44197-c21b-487e-9c8e-82c1e0e426c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.408282858Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b354c58-729a-4a7d-a73e-7ade4846af6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.40912206Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.409732715Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.40999541Z" level=info msg="Ran pod sandbox 566249c473d84fa5225d39f78b4b4e5e4670840ac5b85a583e79394adf2ecb90 with infra container: kube-system/kube-proxy-kzkbr/POD" id=89a44197-c21b-487e-9c8e-82c1e0e426c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.410409312Z" level=info msg="Ran pod sandbox da5ad34387ef1f0fc429e2a2c14273b245ba867a9f9d1e2569da280942fae5a0 with infra container: kube-system/kindnet-74fnf/POD" id=1b354c58-729a-4a7d-a73e-7ade4846af6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.410945912Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ae404702-cde0-4e30-a0b6-5261be5da018 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.411252605Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=aa1d84bc-8727-46dd-b809-b569f3ac6d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.411854597Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=21e15870-c587-4911-b599-f71d112d901e name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412113777Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=c2ba3029-3543-423d-aa1b-fca0c6033e7c name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412788971Z" level=info msg="Creating container: kube-system/kube-proxy-kzkbr/kube-proxy" id=3242f498-7c1d-4014-8efb-48db9d073ed4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412901497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.413141534Z" level=info msg="Creating container: kube-system/kindnet-74fnf/kindnet-cni" id=286b9599-20a9-49b0-acc9-3585126af411 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.41322754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.417593085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418103467Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418456264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418957332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.444644467Z" level=info msg="Created container bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792: kube-system/kindnet-74fnf/kindnet-cni" id=286b9599-20a9-49b0-acc9-3585126af411 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.445192368Z" level=info msg="Starting container: bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792" id=db25a680-bd30-4174-84db-6502167711d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.446507439Z" level=info msg="Created container 39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd: kube-system/kube-proxy-kzkbr/kube-proxy" id=3242f498-7c1d-4014-8efb-48db9d073ed4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.447095604Z" level=info msg="Starting container: 39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd" id=efcd6508-da2e-4631-bf56-721c6701e82a name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.447391048Z" level=info msg="Started container" PID=1054 containerID=bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792 description=kube-system/kindnet-74fnf/kindnet-cni id=db25a680-bd30-4174-84db-6502167711d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da5ad34387ef1f0fc429e2a2c14273b245ba867a9f9d1e2569da280942fae5a0
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.44978713Z" level=info msg="Started container" PID=1053 containerID=39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd description=kube-system/kube-proxy-kzkbr/kube-proxy id=efcd6508-da2e-4631-bf56-721c6701e82a name=/runtime.v1.RuntimeService/StartContainer sandboxID=566249c473d84fa5225d39f78b4b4e5e4670840ac5b85a583e79394adf2ecb90
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bd147eb423bfd       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   da5ad34387ef1       kindnet-74fnf                               kube-system
	39fbe283169d1       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   4 seconds ago       Running             kube-proxy                1                   566249c473d84       kube-proxy-kzkbr                            kube-system
	99b26115b080a       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   7 seconds ago       Running             kube-controller-manager   1                   c8c2d2ed12122       kube-controller-manager-newest-cni-479871   kube-system
	ff6b6b4161634       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   7 seconds ago       Running             kube-scheduler            1                   821adb7916bf1       kube-scheduler-newest-cni-479871            kube-system
	5b07da5a20ba2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   7 seconds ago       Running             kube-apiserver            1                   55b5466f5af79       kube-apiserver-newest-cni-479871            kube-system
	8184f33d790d9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   935b57d6942e3       etcd-newest-cni-479871                      kube-system
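Note: the table above is CRI output; on the node it can be reproduced directly with crictl, which talks to CRI-O over the CRI socket (and works where the earlier runc listing failed):

    $ sudo crictl ps -a   # containers with state, attempt count, and pod association
    $ sudo crictl pods    # the corresponding pod sandboxes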
	
	
	==> describe nodes <==
	Name:               newest-cni-479871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-479871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-479871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:57:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-479871
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-479871
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                c74e85f6-b22b-4d3f-a221-99d5faff29cc
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-479871                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-74fnf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-479871             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-479871    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-kzkbr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-479871             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-479871 event: Registered Node newest-cni-479871 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-479871 event: Registered Node newest-cni-479871 in Controller
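Note: the node is still NotReady and carries the node.kubernetes.io/not-ready:NoSchedule taint, which is why coredns and storage-provisioner were reported Unschedulable earlier in this log; the taint should clear once kindnet writes a CNI config into /etc/cni/net.d/. To watch the transition (a sketch):

    $ kubectl --context newest-cni-479871 get nodes -w
    $ kubectl --context newest-cni-479871 describe node newest-cni-479871 | grep -A1 Taints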
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:42 up 40 min,  0 user,  load average: 5.23, 3.33, 2.01
	Linux newest-cni-479871 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: E1228 06:57:36.836675     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-479871\" already exists" pod="kube-system/kube-apiserver-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.836716     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: E1228 06:57:36.843284     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-479871\" already exists" pod="kube-system/kube-controller-manager-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844419     674 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844492     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844518     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.845383     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.094638     674 apiserver.go:52] "Watching apiserver"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.100792     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-479871" containerName="kube-controller-manager"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.141236     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-479871" containerName="etcd"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.141373     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-479871"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.141715     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-479871" containerName="kube-apiserver"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.146493     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-479871\" already exists" pod="kube-system/kube-scheduler-newest-cni-479871"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.146622     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.198314     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.273887     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-xtables-lock\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.273928     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-xtables-lock\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274185     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-lib-modules\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274241     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-lib-modules\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274374     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-cni-cfg\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:38 newest-cni-479871 kubelet[674]: E1228 06:57:38.148927     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:39 newest-cni-479871 kubelet[674]: I1228 06:57:39.160996     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
** stderr ** 
	E1228 06:57:41.773362  276297 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:41.835731  276297 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:41.897288  276297 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:41.959541  276297 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:41Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:42.027241  276297 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:42.104900  276297 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:42.170298  276297 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:42.234788  276297 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:42Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:42.300762  276297 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:42Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
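Every one of the logs.go:279 errors above is the same underlying failure repeated once per component: minikube's container-listing helper shells out to runc with --root /run/runc, and that state directory does not exist on this crio node. A minimal sketch for reproducing the probe by hand, and for querying the runtime through its CRI socket instead, is below (that crictl is present in the kicbase image, and the exact state-directory layout, are assumptions, not something this report verifies):

	# Re-run the exact probe the helper uses (fails while /run/runc is absent):
	out/minikube-linux-amd64 -p newest-cni-479871 ssh "sudo runc --root /run/runc list -f json"
	# Ask crio itself, over its CRI socket, which containers exist (assumes crictl is on the node):
	out/minikube-linux-amd64 -p newest-cni-479871 ssh "sudo crictl ps -a"
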
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-479871 -n newest-cni-479871: exit status 2 (425.815704ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-479871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv: exit status 1 (79.751362ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-cqtm4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-xjkqj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-854wv" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv: exit status 1
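The four NotFound errors are an artifact of how the post-mortem helper invokes kubectl: describe pod is issued without a namespace, so it searches only default, while the non-running pods live in kube-system and (for the dashboard pods) kubernetes-dashboard. A sketch of the namespaced variants that would resolve, with the namespaces inferred from the pod-name prefixes (an assumption, not taken from this run):

	kubectl --context newest-cni-479871 -n kube-system describe pod coredns-7d764666f9-cqtm4 storage-provisioner
	kubectl --context newest-cni-479871 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv
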
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-479871
helpers_test.go:244: (dbg) docker inspect newest-cni-479871:

-- stdout --
	[
	    {
	        "Id": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	        "Created": "2025-12-28T06:56:54.089539242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:57:27.849721622Z",
	            "FinishedAt": "2025-12-28T06:57:26.934247109Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/hosts",
	        "LogPath": "/var/lib/docker/containers/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8/c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8-json.log",
	        "Name": "/newest-cni-479871",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-479871:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-479871",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c33fbf6c53872baefd350c2b3a39d059949b68fa85b9a30cb0befeb93666d2b8",
	                "LowerDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/521077bfa31a3e28dde97a8d39e5454f7bedd3f36c3b2f69239bf54eb94597b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-479871",
	                "Source": "/var/lib/docker/volumes/newest-cni-479871/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-479871",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-479871",
	                "name.minikube.sigs.k8s.io": "newest-cni-479871",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fe586bcaf21681d7e37e4e7c893ff8d1d4575d0df1323df30e945f5154ed01bf",
	            "SandboxKey": "/var/run/docker/netns/fe586bcaf216",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-479871": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e93d5a2f53a18661bc1ad8ca49ab2b022bc0bcfa1a555873e6d7e016530b0cb",
	                    "EndpointID": "97ef2ce8d5fe85f46dff4e5cdd93862933dd5ee622a0ac89e41ea30a61725303",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:7f:c9:8b:c5:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-479871",
	                        "c33fbf6c5387"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
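The NetworkSettings.Ports block above is where the tooling gets the host-side SSH port (33103) and API-server port (33106). A quick sketch of extracting a single mapping, using the same Go template minikube itself runs elsewhere in this log, plus a jq equivalent (jq availability on the host is an assumption):

	# Go-template form, as used by the cli_runner lines in this log:
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	# jq equivalent for the API-server mapping:
	docker inspect newest-cni-479871 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'
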
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871: exit status 2 (389.093144ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-479871 logs -n 25: (1.161004741s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-422591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p embed-certs-422591 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-500581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-500581 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ image   │ old-k8s-version-694122 image list --format=json                                                                                                                                                                                               │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ pause   │ -p old-k8s-version-694122 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │                     │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:27.608626  272455 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:27.608891  272455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:27.608901  272455 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:27.608905  272455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:27.609095  272455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:27.609520  272455 out.go:368] Setting JSON to false
	I1228 06:57:27.610841  272455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2400,"bootTime":1766902648,"procs":512,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:27.610899  272455 start.go:143] virtualization: kvm guest
	I1228 06:57:27.613002  272455 out.go:179] * [newest-cni-479871] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:27.614314  272455 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:27.614309  272455 notify.go:221] Checking for updates...
	I1228 06:57:27.615685  272455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:27.617024  272455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:27.618436  272455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:27.619651  272455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:27.620801  272455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:27.622394  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:27.622871  272455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:27.649730  272455 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:27.649858  272455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:27.711778  272455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:27.701041109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:27.711909  272455 docker.go:319] overlay module found
	I1228 06:57:27.714695  272455 out.go:179] * Using the docker driver based on existing profile
	I1228 06:57:27.715814  272455 start.go:309] selected driver: docker
	I1228 06:57:27.715827  272455 start.go:928] validating driver "docker" against &{Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:27.715906  272455 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:27.716544  272455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:27.774275  272455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:27.762678993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:27.774595  272455 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:27.774629  272455 cni.go:84] Creating CNI manager for ""
	I1228 06:57:27.774690  272455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:27.774727  272455 start.go:353] cluster config:
	{Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:27.776533  272455 out.go:179] * Starting "newest-cni-479871" primary control-plane node in "newest-cni-479871" cluster
	I1228 06:57:27.777655  272455 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:27.778822  272455 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:27.779911  272455 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:27.779958  272455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:27.779956  272455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:27.780007  272455 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:27.780266  272455 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:27.780296  272455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:27.780460  272455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:57:27.801449  272455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:27.801472  272455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:27.801491  272455 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:27.801528  272455 start.go:360] acquireMachinesLock for newest-cni-479871: {Name:mk0ffa1f12460094192ce711dd360a9389869f0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:27.801591  272455 start.go:364] duration metric: took 40.857µs to acquireMachinesLock for "newest-cni-479871"
	I1228 06:57:27.801613  272455 start.go:96] Skipping create...Using existing machine configuration
	I1228 06:57:27.801619  272455 fix.go:54] fixHost starting: 
	I1228 06:57:27.801874  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:27.820884  272455 fix.go:112] recreateIfNeeded on newest-cni-479871: state=Stopped err=<nil>
	W1228 06:57:27.820925  272455 fix.go:138] unexpected machine state, will restart: <nil>
	I1228 06:57:24.037107  270987 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 06:57:24.066848  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:24.086253  270987 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 06:57:24.086272  270987 kic_runner.go:114] Args: [docker exec --privileged auto-610916 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 06:57:24.132042  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:24.155369  270987 machine.go:94] provisionDockerMachine start ...
	I1228 06:57:24.155446  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:24.175970  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:24.176342  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:24.176366  270987 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:57:24.177174  270987 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33426->127.0.0.1:33098: read: connection reset by peer
	I1228 06:57:27.305442  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-610916
	
	I1228 06:57:27.305473  270987 ubuntu.go:182] provisioning hostname "auto-610916"
	I1228 06:57:27.305525  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.325728  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.325955  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.325974  270987 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-610916 && echo "auto-610916" | sudo tee /etc/hostname
	I1228 06:57:27.465700  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-610916
	
	I1228 06:57:27.465786  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.485558  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.485776  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.485793  270987 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-610916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-610916/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-610916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:57:27.614741  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:57:27.614772  270987 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:57:27.614794  270987 ubuntu.go:190] setting up certificates
	I1228 06:57:27.614809  270987 provision.go:84] configureAuth start
	I1228 06:57:27.614857  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:27.636073  270987 provision.go:143] copyHostCerts
	I1228 06:57:27.636153  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:57:27.636170  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:57:27.636258  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:57:27.636381  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:57:27.636394  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:57:27.636443  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:57:27.636533  270987 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:57:27.636544  270987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:57:27.636587  270987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:57:27.636670  270987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.auto-610916 san=[127.0.0.1 192.168.94.2 auto-610916 localhost minikube]
	I1228 06:57:27.665458  270987 provision.go:177] copyRemoteCerts
	I1228 06:57:27.665531  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:57:27.665584  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.690153  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:27.787684  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:57:27.807992  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1228 06:57:27.826888  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1228 06:57:27.846707  270987 provision.go:87] duration metric: took 231.883622ms to configureAuth
	I1228 06:57:27.846738  270987 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:57:27.846951  270987 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:27.847088  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:27.867779  270987 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:27.868099  270987 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1228 06:57:27.868129  270987 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:57:28.172836  270987 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:57:28.172866  270987 machine.go:97] duration metric: took 4.017476194s to provisionDockerMachine
	I1228 06:57:28.172880  270987 client.go:176] duration metric: took 8.942276035s to LocalClient.Create
	I1228 06:57:28.172906  270987 start.go:167] duration metric: took 8.942337637s to libmachine.API.Create "auto-610916"
	I1228 06:57:28.172920  270987 start.go:293] postStartSetup for "auto-610916" (driver="docker")
	I1228 06:57:28.172932  270987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:57:28.173050  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:57:28.173126  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.193062  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.298659  270987 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:57:28.302677  270987 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:57:28.302714  270987 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:57:28.302728  270987 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:57:28.302789  270987 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:57:28.302894  270987 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:57:28.303043  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:57:28.311353  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:28.335660  270987 start.go:296] duration metric: took 162.723362ms for postStartSetup
	I1228 06:57:28.336202  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:28.358542  270987 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/config.json ...
	I1228 06:57:28.358812  270987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:28.358865  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.378973  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.477523  270987 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:57:28.482381  270987 start.go:128] duration metric: took 9.253964147s to createHost
	I1228 06:57:28.482404  270987 start.go:83] releasing machines lock for "auto-610916", held for 9.254082575s
	I1228 06:57:28.482463  270987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-610916
	I1228 06:57:28.500103  270987 ssh_runner.go:195] Run: cat /version.json
	I1228 06:57:28.500154  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.500204  270987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:57:28.500271  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:28.518391  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.518847  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:28.660774  270987 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:28.668879  270987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:57:28.704105  270987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:57:28.708577  270987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:57:28.708639  270987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:57:28.735158  270987 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
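The find command above is logged with its shell quoting stripped (the runner joins the raw args with spaces). A properly escaped equivalent of the same step, which parks any bridge/podman CNI configs so they cannot conflict with the CNI minikube installs later, is:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;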
	I1228 06:57:28.735181  270987 start.go:496] detecting cgroup driver to use...
	I1228 06:57:28.735212  270987 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:57:28.735254  270987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:57:28.750799  270987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:57:28.763144  270987 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:57:28.763190  270987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:57:28.780178  270987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:57:28.796822  270987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:57:28.878184  270987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:57:28.967161  270987 docker.go:234] disabling docker service ...
	I1228 06:57:28.967217  270987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:57:28.986533  270987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:57:28.999718  270987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1228 06:57:26.134734  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:28.135917  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:29.087789  270987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:57:29.174616  270987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:57:29.186986  270987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:57:29.200989  270987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:57:29.201053  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.210957  270987 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:57:29.211010  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.219703  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.227840  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.236513  270987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:57:29.244294  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.252649  270987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.266492  270987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:29.275948  270987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:57:29.283764  270987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:57:29.291555  270987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:29.369411  270987 ssh_runner.go:195] Run: sudo systemctl restart crio
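All of the sed edits above target the same drop-in, so they can be spot-checked with a single grep before or after the crio restart. Expected values, per the commands just run:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"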
	I1228 06:57:29.504432  270987 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:57:29.504495  270987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:57:29.508431  270987 start.go:574] Will wait 60s for crictl version
	I1228 06:57:29.508476  270987 ssh_runner.go:195] Run: which crictl
	I1228 06:57:29.511936  270987 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:57:29.535964  270987 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:57:29.536070  270987 ssh_runner.go:195] Run: crio --version
	I1228 06:57:29.564818  270987 ssh_runner.go:195] Run: crio --version
	I1228 06:57:29.593580  270987 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:57:29.594684  270987 cli_runner.go:164] Run: docker network inspect auto-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:29.611580  270987 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1228 06:57:29.615568  270987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
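The one-liner above is an idempotent /etc/hosts update: filter out any stale entry for the name, append the fresh mapping, then copy the temp file back with sudo (a bare > redirect into /etc/hosts would run as the unprivileged SSH user and fail). The same pattern, spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.94.1\thost.minikube.internal'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts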
	I1228 06:57:29.626319  270987 kubeadm.go:884] updating cluster {Name:auto-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:57:29.626423  270987 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:29.626465  270987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:29.661197  270987 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:29.661223  270987 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:57:29.661276  270987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:29.687846  270987 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:29.687866  270987 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:57:29.687873  270987 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1228 06:57:29.687961  270987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-610916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
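Note the empty ExecStart= line in the generated drop-in above: that is the standard systemd idiom for clearing the base unit's ExecStart so the drop-in's kubelet invocation fully replaces it rather than being appended. The merged result can be inspected on the node with:

	systemctl cat kubelet.service   # base unit plus the 10-kubeadm.conf override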
	I1228 06:57:29.688049  270987 ssh_runner.go:195] Run: crio config
	I1228 06:57:29.733607  270987 cni.go:84] Creating CNI manager for ""
	I1228 06:57:29.733628  270987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:29.733642  270987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 06:57:29.733663  270987 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-610916 NodeName:auto-610916 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:57:29.733768  270987 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-610916"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:57:29.733822  270987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:57:29.742276  270987 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:57:29.742348  270987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:57:29.750012  270987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1228 06:57:29.762831  270987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:57:29.778755  270987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
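With the unit files and the kubeadm config written (the YAML shown above is the 2207 bytes just copied), one way to sanity-check the config before the real init is kubeadm's dry-run mode, using the path from the log; treat this as a sketch, since dry-run still resolves live defaults:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run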
	I1228 06:57:29.791627  270987 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:57:29.795195  270987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:57:29.804912  270987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:29.886268  270987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:29.916647  270987 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916 for IP: 192.168.94.2
	I1228 06:57:29.916668  270987 certs.go:195] generating shared ca certs ...
	I1228 06:57:29.916685  270987 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:29.916841  270987 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:57:29.916901  270987 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:57:29.916926  270987 certs.go:257] generating profile certs ...
	I1228 06:57:29.916992  270987 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key
	I1228 06:57:29.917012  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt with IP's: []
	I1228 06:57:30.044926  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt ...
	I1228 06:57:30.044958  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.crt: {Name:mk1312f0b2a2032c49b00ad683907b4a293451e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.045195  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key ...
	I1228 06:57:30.045216  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/client.key: {Name:mk493068c73b3fffa7862dab2db2e5af2ad4cdf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.045363  270987 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28
	I1228 06:57:30.045389  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1228 06:57:30.158189  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 ...
	I1228 06:57:30.158218  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28: {Name:mk504cc05bb8ae655c586e0de3d0329e16c65b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.158421  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28 ...
	I1228 06:57:30.158440  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28: {Name:mk9ef5b5525f742323f13a2a32ffa9eba64f0142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.158555  270987 certs.go:382] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt.fdf6da28 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt
	I1228 06:57:30.158659  270987 certs.go:386] copying /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key.fdf6da28 -> /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key
	I1228 06:57:30.158741  270987 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key
	I1228 06:57:30.158759  270987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt with IP's: []
	I1228 06:57:30.270524  270987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt ...
	I1228 06:57:30.270554  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt: {Name:mka63596e1271c706e6b7eac62cdc7cc3ca4865f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.270721  270987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key ...
	I1228 06:57:30.270744  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key: {Name:mkf21e3e4aab416c5d1d32fdb4276fe2f2f42020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:30.270957  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:57:30.271003  270987 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:57:30.271013  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:57:30.271055  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:57:30.271090  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:57:30.271114  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:57:30.271159  270987 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:30.271792  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:57:30.294411  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:57:30.313499  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:57:30.331456  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:57:30.348879  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1228 06:57:30.366165  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:57:30.383872  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:57:30.400633  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/auto-610916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 06:57:30.417868  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:57:30.436915  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:57:30.454672  270987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:57:30.471864  270987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:57:30.484411  270987 ssh_runner.go:195] Run: openssl version
	I1228 06:57:30.490518  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.498111  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:57:30.505463  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.509114  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.509169  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:57:30.544782  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:30.552986  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90762.pem /etc/ssl/certs/3ec20f2e.0
	I1228 06:57:30.560283  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.567532  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:57:30.575539  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.579221  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.579276  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:30.613791  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:57:30.621616  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 06:57:30.629144  270987 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.636839  270987 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:57:30.644526  270987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.648439  270987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.648495  270987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:57:30.687718  270987 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:57:30.695733  270987 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9076.pem /etc/ssl/certs/51391683.0
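The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: TLS clients locate a CA in /etc/ssl/certs by hashing its subject, so each installed PEM gets a <hash>.0 symlink. Any of the hashes can be reproduced directly:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  -> matches the /etc/ssl/certs/b5213941.0 link created above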
	I1228 06:57:30.703073  270987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:57:30.706590  270987 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 06:57:30.706638  270987 kubeadm.go:401] StartCluster: {Name:auto-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:30.706712  270987 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:30.753850  270987 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:57:30.765149  270987 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:30Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:30.765231  270987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:57:30.772721  270987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 06:57:30.780189  270987 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 06:57:30.780230  270987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 06:57:30.787573  270987 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 06:57:30.787587  270987 kubeadm.go:158] found existing configuration files:
	
	I1228 06:57:30.787634  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 06:57:30.795410  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 06:57:30.795458  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 06:57:30.802735  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 06:57:30.809751  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 06:57:30.809788  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 06:57:30.816643  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 06:57:30.823688  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 06:57:30.823727  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 06:57:30.830609  270987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 06:57:30.837629  270987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 06:57:30.837665  270987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
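The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. Condensed into a loop, the step is:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done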
	I1228 06:57:30.844490  270987 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 06:57:30.881554  270987 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 06:57:30.881622  270987 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 06:57:30.944791  270987 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 06:57:30.944862  270987 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1228 06:57:30.944905  270987 kubeadm.go:319] OS: Linux
	I1228 06:57:30.944984  270987 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 06:57:30.945077  270987 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 06:57:30.945147  270987 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 06:57:30.945228  270987 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 06:57:30.945296  270987 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 06:57:30.945361  270987 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 06:57:30.945425  270987 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 06:57:30.945497  270987 kubeadm.go:319] CGROUPS_IO: enabled
	I1228 06:57:31.003363  270987 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 06:57:31.003533  270987 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 06:57:31.003696  270987 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 06:57:31.010847  270987 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1228 06:57:27.671633  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:30.169012  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:31.013041  270987 out.go:252]   - Generating certificates and keys ...
	I1228 06:57:31.013115  270987 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 06:57:31.013217  270987 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 06:57:31.076574  270987 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 06:57:31.152713  270987 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 06:57:31.254971  270987 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 06:57:31.392250  270987 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 06:57:31.425796  270987 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 06:57:31.426013  270987 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-610916 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:57:31.498834  270987 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 06:57:31.499021  270987 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-610916 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1228 06:57:31.603128  270987 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 06:57:31.639904  270987 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 06:57:31.740867  270987 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 06:57:31.740970  270987 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 06:57:31.873393  270987 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 06:57:31.907883  270987 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 06:57:31.989991  270987 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 06:57:32.036264  270987 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 06:57:32.114999  270987 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 06:57:32.115482  270987 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 06:57:32.121217  270987 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 06:57:27.822983  272455 out.go:252] * Restarting existing docker container for "newest-cni-479871" ...
	I1228 06:57:27.823078  272455 cli_runner.go:164] Run: docker start newest-cni-479871
	I1228 06:57:28.077460  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:28.098743  272455 kic.go:430] container "newest-cni-479871" state is running.
	I1228 06:57:28.099228  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:28.120367  272455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/config.json ...
	I1228 06:57:28.120589  272455 machine.go:94] provisionDockerMachine start ...
	I1228 06:57:28.120661  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:28.143624  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:28.143953  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:28.143971  272455 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 06:57:28.144632  272455 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50026->127.0.0.1:33103: read: connection reset by peer
	I1228 06:57:31.270069  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:57:31.270100  272455 ubuntu.go:182] provisioning hostname "newest-cni-479871"
	I1228 06:57:31.270163  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.289699  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.289926  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.289941  272455 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-479871 && echo "newest-cni-479871" | sudo tee /etc/hostname
	I1228 06:57:31.426879  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-479871
	
	I1228 06:57:31.426967  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.445496  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.445747  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.445765  272455 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-479871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-479871/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-479871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 06:57:31.567877  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 06:57:31.567905  272455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-5550/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-5550/.minikube}
	I1228 06:57:31.567941  272455 ubuntu.go:190] setting up certificates
	I1228 06:57:31.567959  272455 provision.go:84] configureAuth start
	I1228 06:57:31.568049  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:31.588447  272455 provision.go:143] copyHostCerts
	I1228 06:57:31.588510  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem, removing ...
	I1228 06:57:31.588533  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem
	I1228 06:57:31.588582  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/cert.pem (1123 bytes)
	I1228 06:57:31.588665  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem, removing ...
	I1228 06:57:31.588674  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem
	I1228 06:57:31.588700  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/key.pem (1679 bytes)
	I1228 06:57:31.588751  272455 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem, removing ...
	I1228 06:57:31.588758  272455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem
	I1228 06:57:31.588780  272455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-5550/.minikube/ca.pem (1082 bytes)
	I1228 06:57:31.588830  272455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem org=jenkins.newest-cni-479871 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-479871]
	I1228 06:57:31.739310  272455 provision.go:177] copyRemoteCerts
	I1228 06:57:31.739389  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 06:57:31.739435  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.758636  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:31.850505  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 06:57:31.868636  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 06:57:31.887288  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1228 06:57:31.904303  272455 provision.go:87] duration metric: took 336.319162ms to configureAuth
	I1228 06:57:31.904329  272455 ubuntu.go:206] setting minikube options for container-runtime
	I1228 06:57:31.904536  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:31.904641  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:31.922743  272455 main.go:144] libmachine: Using SSH client type: native
	I1228 06:57:31.922960  272455 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1228 06:57:31.922975  272455 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1228 06:57:32.255603  272455 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1228 06:57:32.255627  272455 machine.go:97] duration metric: took 4.135020669s to provisionDockerMachine
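The sysconfig write a few lines up (echoed back in the SSH output) is how the extra CRI-O flag is handed to the crio service on this image; the file can be confirmed on the node with:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '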
	I1228 06:57:32.255641  272455 start.go:293] postStartSetup for "newest-cni-479871" (driver="docker")
	I1228 06:57:32.255654  272455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 06:57:32.255713  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 06:57:32.255760  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.276512  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.368440  272455 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 06:57:32.372197  272455 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 06:57:32.372226  272455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 06:57:32.372237  272455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/addons for local assets ...
	I1228 06:57:32.372290  272455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-5550/.minikube/files for local assets ...
	I1228 06:57:32.372375  272455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem -> 90762.pem in /etc/ssl/certs
	I1228 06:57:32.372463  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 06:57:32.381338  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:32.401589  272455 start.go:296] duration metric: took 145.932701ms for postStartSetup
	I1228 06:57:32.401683  272455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:32.401737  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.422931  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.512401  272455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 06:57:32.517023  272455 fix.go:56] duration metric: took 4.7153984s for fixHost
	I1228 06:57:32.517069  272455 start.go:83] releasing machines lock for "newest-cni-479871", held for 4.715464471s
	I1228 06:57:32.517153  272455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-479871
	I1228 06:57:32.535177  272455 ssh_runner.go:195] Run: cat /version.json
	I1228 06:57:32.535227  272455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 06:57:32.535244  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.535318  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:32.554418  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.554751  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:32.645792  272455 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:32.700940  272455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1228 06:57:32.734908  272455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 06:57:32.739648  272455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 06:57:32.739716  272455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 06:57:32.747404  272455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1228 06:57:32.747422  272455 start.go:496] detecting cgroup driver to use...
	I1228 06:57:32.747453  272455 detect.go:190] detected "systemd" cgroup driver on host os
	I1228 06:57:32.747508  272455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 06:57:32.762371  272455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 06:57:32.776108  272455 docker.go:218] disabling cri-docker service (if available) ...
	I1228 06:57:32.776178  272455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1228 06:57:32.790257  272455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1228 06:57:32.801901  272455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1228 06:57:32.882973  272455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1228 06:57:32.963675  272455 docker.go:234] disabling docker service ...
	I1228 06:57:32.963729  272455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1228 06:57:32.978111  272455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1228 06:57:32.990320  272455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1228 06:57:33.068597  272455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1228 06:57:33.166992  272455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 06:57:33.185863  272455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 06:57:33.206306  272455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1228 06:57:33.206387  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.217732  272455 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1228 06:57:33.217804  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.228626  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.240270  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.251994  272455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 06:57:33.260712  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.269769  272455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.278664  272455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1228 06:57:33.287267  272455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 06:57:33.294325  272455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 06:57:33.301312  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:33.390081  272455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1228 06:57:33.553715  272455 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1228 06:57:33.553780  272455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1228 06:57:33.557765  272455 start.go:574] Will wait 60s for crictl version
	I1228 06:57:33.557821  272455 ssh_runner.go:195] Run: which crictl
	I1228 06:57:33.561315  272455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 06:57:33.586144  272455 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1228 06:57:33.586231  272455 ssh_runner.go:195] Run: crio --version
	I1228 06:57:33.615740  272455 ssh_runner.go:195] Run: crio --version
	I1228 06:57:33.657346  272455 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1228 06:57:33.658662  272455 cli_runner.go:164] Run: docker network inspect newest-cni-479871 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:33.680168  272455 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 06:57:33.684562  272455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:57:33.696730  272455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1228 06:57:32.122847  270987 out.go:252]   - Booting up control plane ...
	I1228 06:57:32.122976  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 06:57:32.123098  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 06:57:32.123841  270987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 06:57:32.150283  270987 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 06:57:32.150450  270987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 06:57:32.157224  270987 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 06:57:32.157429  270987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 06:57:32.157489  270987 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 06:57:32.266779  270987 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 06:57:32.266950  270987 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 06:57:32.768514  270987 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.843367ms
	I1228 06:57:32.773307  270987 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1228 06:57:32.773443  270987 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1228 06:57:32.773570  270987 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1228 06:57:32.773668  270987 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1228 06:57:33.777587  270987 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004224014s
	W1228 06:57:30.634896  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:32.635515  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:33.697864  272455 kubeadm.go:884] updating cluster {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 06:57:33.698023  272455 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:33.698207  272455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:33.738334  272455 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:33.738354  272455 crio.go:503] Images already preloaded, skipping extraction
	I1228 06:57:33.738394  272455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1228 06:57:33.769247  272455 crio.go:631] all images are preloaded for cri-o runtime.
	I1228 06:57:33.769274  272455 cache_images.go:86] Images are preloaded, skipping loading
	I1228 06:57:33.769284  272455 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1228 06:57:33.769403  272455 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-479871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
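The empty ExecStart= line in the unit above is the standard systemd idiom for replacing, rather than appending to, the exec command in a drop-in override. Once the 10-kubeadm.conf drop-in written below is installed, the effective unit can be checked with:

	# Inspect the kubelet unit plus its drop-in override.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager
	sudo systemctl daemon-reload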
	I1228 06:57:33.769491  272455 ssh_runner.go:195] Run: crio config
	I1228 06:57:33.819410  272455 cni.go:84] Creating CNI manager for ""
	I1228 06:57:33.819443  272455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:33.819463  272455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1228 06:57:33.819495  272455 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-479871 NodeName:newest-cni-479871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 06:57:33.819646  272455 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-479871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 06:57:33.819717  272455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 06:57:33.829016  272455 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 06:57:33.829101  272455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 06:57:33.837816  272455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1228 06:57:33.853961  272455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 06:57:33.868483  272455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
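Once staged as /var/tmp/minikube/kubeadm.yaml.new, the four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be sanity-checked offline; a sketch using stock kubeadm subcommands (recent kubeadm releases ship `config validate`):

	# Validate the staged kubeadm config without touching the cluster.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Print fully-defaulted settings for comparison if something looks off.
	kubeadm config print init-defaults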
	I1228 06:57:33.882729  272455 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 06:57:33.886671  272455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 06:57:33.897459  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:33.989500  272455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:34.014562  272455 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871 for IP: 192.168.85.2
	I1228 06:57:34.014584  272455 certs.go:195] generating shared ca certs ...
	I1228 06:57:34.014601  272455 certs.go:227] acquiring lock for ca certs: {Name:mk77ee411d20e2d367f536371cb4debf1ce5f664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.014768  272455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key
	I1228 06:57:34.014823  272455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key
	I1228 06:57:34.014837  272455 certs.go:257] generating profile certs ...
	I1228 06:57:34.014938  272455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/client.key
	I1228 06:57:34.015009  272455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key.37bd9581
	I1228 06:57:34.015080  272455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key
	I1228 06:57:34.015244  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem (1338 bytes)
	W1228 06:57:34.015289  272455 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076_empty.pem, impossibly tiny 0 bytes
	I1228 06:57:34.015304  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca-key.pem (1675 bytes)
	I1228 06:57:34.015342  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem (1082 bytes)
	I1228 06:57:34.015381  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem (1123 bytes)
	I1228 06:57:34.015416  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/certs/key.pem (1679 bytes)
	I1228 06:57:34.015484  272455 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem (1708 bytes)
	I1228 06:57:34.016185  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 06:57:34.037890  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1228 06:57:34.058760  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 06:57:34.081701  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 06:57:34.108478  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1228 06:57:34.130660  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 06:57:34.152216  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 06:57:34.170684  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/newest-cni-479871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1228 06:57:34.189183  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 06:57:34.206606  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/certs/9076.pem --> /usr/share/ca-certificates/9076.pem (1338 bytes)
	I1228 06:57:34.229125  272455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/ssl/certs/90762.pem --> /usr/share/ca-certificates/90762.pem (1708 bytes)
	I1228 06:57:34.246672  272455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 06:57:34.259394  272455 ssh_runner.go:195] Run: openssl version
	I1228 06:57:34.265808  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.274413  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 06:57:34.282379  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.286174  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.286234  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 06:57:34.327279  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 06:57:34.335837  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.344221  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9076.pem /etc/ssl/certs/9076.pem
	I1228 06:57:34.352006  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.355742  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:31 /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.355798  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9076.pem
	I1228 06:57:34.390265  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 06:57:34.398050  272455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.405829  272455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90762.pem /etc/ssl/certs/90762.pem
	I1228 06:57:34.414747  272455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.425769  272455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:31 /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.425834  272455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90762.pem
	I1228 06:57:34.461046  272455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
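The /etc/ssl/certs/b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: `openssl x509 -hash` prints the 8-hex-digit hash under which TLS libraries look a CA up in the certs directory. One link reproduced by hand:

	# Derive the <hash>.0 symlink name for a CA, as the log does.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked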
	I1228 06:57:34.469275  272455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 06:57:34.477451  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1228 06:57:34.528430  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1228 06:57:34.593309  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1228 06:57:34.652783  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1228 06:57:34.711443  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1228 06:57:34.777093  272455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
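Each -checkend 86400 probe above asks OpenSSL whether the certificate will still be valid 24 hours (86,400 s) from now: exit status 0 means it will, non-zero means it expires within the window. For example:

	# Exit 0 while the cert is good for at least another day.
	openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo 'valid for 24h+' || echo 'expires within 24h'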
	I1228 06:57:34.821043  272455 kubeadm.go:401] StartCluster: {Name:newest-cni-479871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-479871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:34.821189  272455 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:34.873329  272455 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	W1228 06:57:34.885432  272455 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:34Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:34.885501  272455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 06:57:34.893258  272455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1228 06:57:34.893278  272455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1228 06:57:34.893327  272455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1228 06:57:34.901362  272455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1228 06:57:34.902470  272455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-479871" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:34.902987  272455 kubeconfig.go:62] /home/jenkins/minikube-integration/22352-5550/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-479871" cluster setting kubeconfig missing "newest-cni-479871" context setting]
	I1228 06:57:34.903790  272455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.905714  272455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1228 06:57:34.913890  272455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1228 06:57:34.913925  272455 kubeadm.go:602] duration metric: took 20.639201ms to restartPrimaryControlPlane
	I1228 06:57:34.913951  272455 kubeadm.go:403] duration metric: took 92.928236ms to StartCluster
	I1228 06:57:34.913968  272455 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.914077  272455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:34.916170  272455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:34.916443  272455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:34.916550  272455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:57:34.916640  272455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-479871"
	I1228 06:57:34.916658  272455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-479871"
	I1228 06:57:34.916665  272455 config.go:182] Loaded profile config "newest-cni-479871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:34.916676  272455 addons.go:70] Setting dashboard=true in profile "newest-cni-479871"
	I1228 06:57:34.916688  272455 addons.go:239] Setting addon dashboard=true in "newest-cni-479871"
	W1228 06:57:34.916695  272455 addons.go:248] addon dashboard should already be in state true
	W1228 06:57:34.916672  272455 addons.go:248] addon storage-provisioner should already be in state true
	I1228 06:57:34.916723  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.916726  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.916739  272455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-479871"
	I1228 06:57:34.916767  272455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-479871"
	I1228 06:57:34.917088  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.917213  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.917259  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.920009  272455 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:34.921213  272455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:34.945670  272455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-479871"
	W1228 06:57:34.945695  272455 addons.go:248] addon default-storageclass should already be in state true
	I1228 06:57:34.945722  272455 host.go:66] Checking if "newest-cni-479871" exists ...
	I1228 06:57:34.945994  272455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1228 06:57:34.946234  272455 cli_runner.go:164] Run: docker container inspect newest-cni-479871 --format={{.State.Status}}
	I1228 06:57:34.948110  272455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1228 06:57:34.949115  272455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1228 06:57:32.668877  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	W1228 06:57:34.672814  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:34.631300  270987 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.857004545s
	I1228 06:57:36.280888  270987 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.503303433s
	I1228 06:57:36.297545  270987 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1228 06:57:36.307770  270987 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1228 06:57:36.317629  270987 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1228 06:57:36.317908  270987 kubeadm.go:319] [mark-control-plane] Marking the node auto-610916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1228 06:57:36.325908  270987 kubeadm.go:319] [bootstrap-token] Using token: x4upak.rw4kta5bbgc527cy
	I1228 06:57:34.949173  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1228 06:57:34.949184  272455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1228 06:57:34.949246  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.950237  272455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:34.950283  272455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:57:34.950337  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.977411  272455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:34.977508  272455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:57:34.977616  272455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-479871
	I1228 06:57:34.981507  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:34.987263  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:35.003464  272455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/newest-cni-479871/id_rsa Username:docker}
	I1228 06:57:35.061652  272455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:35.075263  272455 api_server.go:52] waiting for apiserver process to appear ...
	I1228 06:57:35.075340  272455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:35.087464  272455 api_server.go:72] duration metric: took 170.983918ms to wait for apiserver process to appear ...
	I1228 06:57:35.087486  272455 api_server.go:88] waiting for apiserver healthz status ...
	I1228 06:57:35.087500  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:35.088882  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1228 06:57:35.088902  272455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1228 06:57:35.093066  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:35.102903  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1228 06:57:35.102928  272455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1228 06:57:35.106367  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:35.116707  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1228 06:57:35.116730  272455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1228 06:57:35.136817  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1228 06:57:35.136850  272455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1228 06:57:35.151009  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1228 06:57:35.151157  272455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1228 06:57:35.165198  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1228 06:57:35.165226  272455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1228 06:57:35.178269  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1228 06:57:35.178288  272455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1228 06:57:35.190857  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1228 06:57:35.190880  272455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1228 06:57:35.203155  272455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1228 06:57:35.203179  272455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1228 06:57:35.215454  272455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
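All ten dashboard manifests staged above land in /etc/kubernetes/addons and are applied in a single kubectl invocation against the node-local kubeconfig. Since everything in that directory is meant to be applied, a directory apply is an equivalent shorthand (it would also pick up the storage-provisioner and storageclass manifests staged moments earlier):

	# Apply every staged addon manifest in one shot (paths from the log).
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/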
	I1228 06:57:36.608985  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 06:57:36.609016  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 06:57:36.609043  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:36.697101  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1228 06:57:36.697130  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1228 06:57:37.087956  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:37.092692  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:37.092723  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:37.261271  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.168167788s)
	I1228 06:57:37.261333  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.154938902s)
	I1228 06:57:37.261418  272455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.045932944s)
	I1228 06:57:37.263061  272455 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-479871 addons enable metrics-server
	
	I1228 06:57:37.272444  272455 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1228 06:57:37.273681  272455 addons.go:530] duration metric: took 2.35713092s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1228 06:57:37.587612  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:37.591761  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1228 06:57:37.591792  272455 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1228 06:57:36.327282  270987 out.go:252]   - Configuring RBAC rules ...
	I1228 06:57:36.327461  270987 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1228 06:57:36.330272  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1228 06:57:36.336284  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1228 06:57:36.338488  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1228 06:57:36.341092  270987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1228 06:57:36.343490  270987 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1228 06:57:36.692514  270987 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1228 06:57:37.105285  270987 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1228 06:57:37.683722  270987 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1228 06:57:37.685396  270987 kubeadm.go:319] 
	I1228 06:57:37.685528  270987 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1228 06:57:37.685536  270987 kubeadm.go:319] 
	I1228 06:57:37.685622  270987 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1228 06:57:37.685628  270987 kubeadm.go:319] 
	I1228 06:57:37.685658  270987 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1228 06:57:37.685724  270987 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1228 06:57:37.685783  270987 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1228 06:57:37.685795  270987 kubeadm.go:319] 
	I1228 06:57:37.685865  270987 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1228 06:57:37.685871  270987 kubeadm.go:319] 
	I1228 06:57:37.685926  270987 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1228 06:57:37.685932  270987 kubeadm.go:319] 
	I1228 06:57:37.685991  270987 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1228 06:57:37.686144  270987 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1228 06:57:37.686241  270987 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1228 06:57:37.686261  270987 kubeadm.go:319] 
	I1228 06:57:37.686360  270987 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1228 06:57:37.686451  270987 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1228 06:57:37.686457  270987 kubeadm.go:319] 
	I1228 06:57:37.686551  270987 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x4upak.rw4kta5bbgc527cy \
	I1228 06:57:37.686668  270987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 \
	I1228 06:57:37.686696  270987 kubeadm.go:319] 	--control-plane 
	I1228 06:57:37.686702  270987 kubeadm.go:319] 
	I1228 06:57:37.686802  270987 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1228 06:57:37.686808  270987 kubeadm.go:319] 
	I1228 06:57:37.686907  270987 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x4upak.rw4kta5bbgc527cy \
	I1228 06:57:37.687052  270987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6534497fd09654e1c9f62bf7a6763f446292593a08619861d4eab5a65759d2d4 
	I1228 06:57:37.690764  270987 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1228 06:57:37.690909  270987 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 06:57:37.690947  270987 cni.go:84] Creating CNI manager for ""
	I1228 06:57:37.690960  270987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1228 06:57:37.695299  270987 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1228 06:57:38.088255  272455 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1228 06:57:38.092765  272455 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
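The healthz progression above is the normal apiserver warm-up: 403 until the RBAC bootstrap policy that lets anonymous clients read /healthz exists, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, then a bare 200 "ok". The same poll from a shell, skipping TLS verification as the log's client effectively does:

	# Poll the health endpoint until the apiserver reports ok.
	until curl -ks https://192.168.85.2:8443/healthz | grep -qx ok; do
	  sleep 0.5
	done
	# Per-check detail while it is still failing (verbose output).
	curl -ks 'https://192.168.85.2:8443/healthz?verbose'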
	I1228 06:57:38.093731  272455 api_server.go:141] control plane version: v1.35.0
	I1228 06:57:38.093757  272455 api_server.go:131] duration metric: took 3.006264561s to wait for apiserver health ...
	I1228 06:57:38.093767  272455 system_pods.go:43] waiting for kube-system pods to appear ...
	I1228 06:57:38.097618  272455 system_pods.go:59] 8 kube-system pods found
	I1228 06:57:38.097656  272455 system_pods.go:61] "coredns-7d764666f9-cqtm4" [80bee88e-62a5-413c-9e2b-0cc274cf605d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:38.097679  272455 system_pods.go:61] "etcd-newest-cni-479871" [8bb011cd-dd9f-4176-b43a-5629132fbf66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1228 06:57:38.097698  272455 system_pods.go:61] "kindnet-74fnf" [f610ca19-f52f-41ef-90d7-6ae6b47445da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1228 06:57:38.097713  272455 system_pods.go:61] "kube-apiserver-newest-cni-479871" [a83949b2-d4ff-40cb-b0de-d4ba8547a489] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1228 06:57:38.097734  272455 system_pods.go:61] "kube-controller-manager-newest-cni-479871" [018c9a7d-7992-49db-afd0-8acc014b1976] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1228 06:57:38.097765  272455 system_pods.go:61] "kube-proxy-kzkbr" [a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1228 06:57:38.097774  272455 system_pods.go:61] "kube-scheduler-newest-cni-479871" [85dcc815-30f1-4c70-a83a-08ca392957f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1228 06:57:38.097782  272455 system_pods.go:61] "storage-provisioner" [267e9641-510e-4fac-a7f3-97501d5ada65] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1228 06:57:38.097791  272455 system_pods.go:74] duration metric: took 4.01547ms to wait for pod list to return data ...
	I1228 06:57:38.097805  272455 default_sa.go:34] waiting for default service account to be created ...
	I1228 06:57:38.100621  272455 default_sa.go:45] found service account: "default"
	I1228 06:57:38.100648  272455 default_sa.go:55] duration metric: took 2.834305ms for default service account to be created ...
	I1228 06:57:38.100670  272455 kubeadm.go:587] duration metric: took 3.184197442s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1228 06:57:38.100695  272455 node_conditions.go:102] verifying NodePressure condition ...
	I1228 06:57:38.103420  272455 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1228 06:57:38.103447  272455 node_conditions.go:123] node cpu capacity is 8
	I1228 06:57:38.103476  272455 node_conditions.go:105] duration metric: took 2.773785ms to run NodePressure ...
	I1228 06:57:38.103493  272455 start.go:242] waiting for startup goroutines ...
	I1228 06:57:38.103506  272455 start.go:247] waiting for cluster config update ...
	I1228 06:57:38.103520  272455 start.go:256] writing updated cluster config ...
	I1228 06:57:38.103880  272455 ssh_runner.go:195] Run: rm -f paused
	I1228 06:57:38.164632  272455 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:38.166186  272455 out.go:179] * Done! kubectl is now configured to use "newest-cni-479871" cluster and "default" namespace by default
	I1228 06:57:37.696615  270987 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1228 06:57:37.704918  270987 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1228 06:57:37.704940  270987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1228 06:57:37.721562  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1228 06:57:37.966215  270987 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1228 06:57:37.966290  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:37.966290  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-610916 minikube.k8s.io/updated_at=2025_12_28T06_57_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=auto-610916 minikube.k8s.io/primary=true
	I1228 06:57:37.975864  270987 ops.go:34] apiserver oom_adj: -16
	I1228 06:57:38.044350  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:38.545015  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
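After the CNI manifest is applied, the remaining bootstrap steps above grant kube-system's default service account cluster-admin, label the node with minikube metadata, and poll `kubectl get sa default` until the default service account exists. Condensed:

	# Post-init steps condensed from the log (binary path from the log).
	K=/var/lib/minikube/binaries/v1.35.0/kubectl
	sudo $K --kubeconfig=/var/lib/minikube/kubeconfig create clusterrolebinding \
	  minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
	until sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get sa default \
	    >/dev/null 2>&1; do sleep 0.5; done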
	W1228 06:57:34.637977  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	W1228 06:57:36.638948  260283 pod_ready.go:104] pod "coredns-7d764666f9-dmhdv" is not "Ready", error: <nil>
	I1228 06:57:38.134693  260283 pod_ready.go:94] pod "coredns-7d764666f9-dmhdv" is "Ready"
	I1228 06:57:38.134723  260283 pod_ready.go:86] duration metric: took 38.005325036s for pod "coredns-7d764666f9-dmhdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.137194  260283 pod_ready.go:83] waiting for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.141168  260283 pod_ready.go:94] pod "etcd-embed-certs-422591" is "Ready"
	I1228 06:57:38.141190  260283 pod_ready.go:86] duration metric: took 3.972263ms for pod "etcd-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.143197  260283 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.147889  260283 pod_ready.go:94] pod "kube-apiserver-embed-certs-422591" is "Ready"
	I1228 06:57:38.147949  260283 pod_ready.go:86] duration metric: took 4.729431ms for pod "kube-apiserver-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.150276  260283 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.333546  260283 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422591" is "Ready"
	I1228 06:57:38.333571  260283 pod_ready.go:86] duration metric: took 183.273399ms for pod "kube-controller-manager-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.533116  260283 pod_ready.go:83] waiting for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.932854  260283 pod_ready.go:94] pod "kube-proxy-j2dkd" is "Ready"
	I1228 06:57:38.932876  260283 pod_ready.go:86] duration metric: took 399.738895ms for pod "kube-proxy-j2dkd" in "kube-system" namespace to be "Ready" or be gone ...
	W1228 06:57:37.169010  261568 pod_ready.go:104] pod "coredns-7d764666f9-9glh9" is not "Ready", error: <nil>
	I1228 06:57:37.669399  261568 pod_ready.go:94] pod "coredns-7d764666f9-9glh9" is "Ready"
	I1228 06:57:37.669431  261568 pod_ready.go:86] duration metric: took 35.50629669s for pod "coredns-7d764666f9-9glh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.672232  261568 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.676380  261568 pod_ready.go:94] pod "etcd-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.676406  261568 pod_ready.go:86] duration metric: took 4.14854ms for pod "etcd-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.678455  261568 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.682902  261568 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.682927  261568 pod_ready.go:86] duration metric: took 4.444305ms for pod "kube-apiserver-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.685195  261568 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:37.867556  261568 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:37.867590  261568 pod_ready.go:86] duration metric: took 182.371857ms for pod "kube-controller-manager-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.067941  261568 pod_ready.go:83] waiting for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.467758  261568 pod_ready.go:94] pod "kube-proxy-95gmh" is "Ready"
	I1228 06:57:38.467785  261568 pod_ready.go:86] duration metric: took 399.816464ms for pod "kube-proxy-95gmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:38.666849  261568 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.067582  261568 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-500581" is "Ready"
	I1228 06:57:39.067613  261568 pod_ready.go:86] duration metric: took 400.734128ms for pod "kube-scheduler-default-k8s-diff-port-500581" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.067630  261568 pod_ready.go:40] duration metric: took 36.909076713s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:39.121498  261568 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:39.122908  261568 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-500581" cluster and "default" namespace by default
	I1228 06:57:39.133792  260283 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.532725  260283 pod_ready.go:94] pod "kube-scheduler-embed-certs-422591" is "Ready"
	I1228 06:57:39.532756  260283 pod_ready.go:86] duration metric: took 398.942325ms for pod "kube-scheduler-embed-certs-422591" in "kube-system" namespace to be "Ready" or be gone ...
	I1228 06:57:39.532771  260283 pod_ready.go:40] duration metric: took 39.407784586s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1228 06:57:39.588361  260283 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1228 06:57:39.590050  260283 out.go:179] * Done! kubectl is now configured to use "embed-certs-422591" cluster and "default" namespace by default
	I1228 06:57:39.045408  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:39.545239  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:40.044591  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:40.544765  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:41.045209  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:41.544469  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:42.045446  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:42.544902  270987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1228 06:57:42.634292  270987 kubeadm.go:1114] duration metric: took 4.668065903s to wait for elevateKubeSystemPrivileges
	I1228 06:57:42.634326  270987 kubeadm.go:403] duration metric: took 11.927692123s to StartCluster
	I1228 06:57:42.634346  270987 settings.go:142] acquiring lock: {Name:mk84c1d4c127eaf11c7cc5cc16de86f86f2dcbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:42.634407  270987 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:42.637124  270987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/kubeconfig: {Name:mke0b45a06b272ee5aa493b62cdbe6f2a53c0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:42.637390  270987 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:42.637521  270987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1228 06:57:42.637547  270987 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1228 06:57:42.637641  270987 addons.go:70] Setting storage-provisioner=true in profile "auto-610916"
	I1228 06:57:42.637696  270987 addons.go:70] Setting default-storageclass=true in profile "auto-610916"
	I1228 06:57:42.637717  270987 addons.go:239] Setting addon storage-provisioner=true in "auto-610916"
	I1228 06:57:42.637720  270987 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-610916"
	I1228 06:57:42.637763  270987 host.go:66] Checking if "auto-610916" exists ...
	I1228 06:57:42.637812  270987 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:42.638109  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:42.638313  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:42.639407  270987 out.go:179] * Verifying Kubernetes components...
	I1228 06:57:42.640689  270987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 06:57:42.668666  270987 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1228 06:57:42.670061  270987 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:42.670081  270987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1228 06:57:42.670148  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:42.672566  270987 addons.go:239] Setting addon default-storageclass=true in "auto-610916"
	I1228 06:57:42.672602  270987 host.go:66] Checking if "auto-610916" exists ...
	I1228 06:57:42.672944  270987 cli_runner.go:164] Run: docker container inspect auto-610916 --format={{.State.Status}}
	I1228 06:57:42.704913  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:42.706211  270987 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:42.706248  270987 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1228 06:57:42.706337  270987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-610916
	I1228 06:57:42.734995  270987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/auto-610916/id_rsa Username:docker}
	I1228 06:57:42.750964  270987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1228 06:57:42.791247  270987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 06:57:42.816719  270987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1228 06:57:42.846328  270987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1228 06:57:42.952767  270987 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1228 06:57:42.954601  270987 node_ready.go:35] waiting up to 15m0s for node "auto-610916" to be "Ready" ...
	I1228 06:57:43.182284  270987 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.404470269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.407696544Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=89a44197-c21b-487e-9c8e-82c1e0e426c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.408282858Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b354c58-729a-4a7d-a73e-7ade4846af6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.40912206Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.409732715Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.40999541Z" level=info msg="Ran pod sandbox 566249c473d84fa5225d39f78b4b4e5e4670840ac5b85a583e79394adf2ecb90 with infra container: kube-system/kube-proxy-kzkbr/POD" id=89a44197-c21b-487e-9c8e-82c1e0e426c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.410409312Z" level=info msg="Ran pod sandbox da5ad34387ef1f0fc429e2a2c14273b245ba867a9f9d1e2569da280942fae5a0 with infra container: kube-system/kindnet-74fnf/POD" id=1b354c58-729a-4a7d-a73e-7ade4846af6e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.410945912Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ae404702-cde0-4e30-a0b6-5261be5da018 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.411252605Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=aa1d84bc-8727-46dd-b809-b569f3ac6d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.411854597Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=21e15870-c587-4911-b599-f71d112d901e name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412113777Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=c2ba3029-3543-423d-aa1b-fca0c6033e7c name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412788971Z" level=info msg="Creating container: kube-system/kube-proxy-kzkbr/kube-proxy" id=3242f498-7c1d-4014-8efb-48db9d073ed4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.412901497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.413141534Z" level=info msg="Creating container: kube-system/kindnet-74fnf/kindnet-cni" id=286b9599-20a9-49b0-acc9-3585126af411 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.41322754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.417593085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418103467Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418456264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.418957332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.444644467Z" level=info msg="Created container bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792: kube-system/kindnet-74fnf/kindnet-cni" id=286b9599-20a9-49b0-acc9-3585126af411 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.445192368Z" level=info msg="Starting container: bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792" id=db25a680-bd30-4174-84db-6502167711d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.446507439Z" level=info msg="Created container 39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd: kube-system/kube-proxy-kzkbr/kube-proxy" id=3242f498-7c1d-4014-8efb-48db9d073ed4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.447095604Z" level=info msg="Starting container: 39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd" id=efcd6508-da2e-4631-bf56-721c6701e82a name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.447391048Z" level=info msg="Started container" PID=1054 containerID=bd147eb423bfd84a96a1a81a90e2399054e85de7afa31369083adca87be94792 description=kube-system/kindnet-74fnf/kindnet-cni id=db25a680-bd30-4174-84db-6502167711d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da5ad34387ef1f0fc429e2a2c14273b245ba867a9f9d1e2569da280942fae5a0
	Dec 28 06:57:37 newest-cni-479871 crio[525]: time="2025-12-28T06:57:37.44978713Z" level=info msg="Started container" PID=1053 containerID=39fbe283169d126c6b1aaeb893e40691f5c91d6c14186ae4e5010ff8f2d542fd description=kube-system/kube-proxy-kzkbr/kube-proxy id=efcd6508-da2e-4631-bf56-721c6701e82a name=/runtime.v1.RuntimeService/StartContainer sandboxID=566249c473d84fa5225d39f78b4b4e5e4670840ac5b85a583e79394adf2ecb90
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bd147eb423bfd       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   7 seconds ago       Running             kindnet-cni               1                   da5ad34387ef1       kindnet-74fnf                               kube-system
	39fbe283169d1       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   7 seconds ago       Running             kube-proxy                1                   566249c473d84       kube-proxy-kzkbr                            kube-system
	99b26115b080a       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   10 seconds ago      Running             kube-controller-manager   1                   c8c2d2ed12122       kube-controller-manager-newest-cni-479871   kube-system
	ff6b6b4161634       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   10 seconds ago      Running             kube-scheduler            1                   821adb7916bf1       kube-scheduler-newest-cni-479871            kube-system
	5b07da5a20ba2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   10 seconds ago      Running             kube-apiserver            1                   55b5466f5af79       kube-apiserver-newest-cni-479871            kube-system
	8184f33d790d9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   10 seconds ago      Running             etcd                      1                   935b57d6942e3       etcd-newest-cni-479871                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-479871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-479871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=newest-cni-479871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:57:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-479871
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 28 Dec 2025 06:57:36 +0000   Sun, 28 Dec 2025 06:57:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-479871
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                c74e85f6-b22b-4d3f-a221-99d5faff29cc
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-479871                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-74fnf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-479871             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-479871    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-kzkbr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-479871             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  34s   node-controller  Node newest-cni-479871 event: Registered Node newest-cni-479871 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-479871 event: Registered Node newest-cni-479871 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:44 up 40 min,  0 user,  load average: 4.89, 3.29, 2.01
	Linux newest-cni-479871 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: E1228 06:57:36.836675     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-479871\" already exists" pod="kube-system/kube-apiserver-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.836716     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: E1228 06:57:36.843284     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-479871\" already exists" pod="kube-system/kube-controller-manager-newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844419     674 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844492     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-479871"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.844518     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 28 06:57:36 newest-cni-479871 kubelet[674]: I1228 06:57:36.845383     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.094638     674 apiserver.go:52] "Watching apiserver"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.100792     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-479871" containerName="kube-controller-manager"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.141236     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-479871" containerName="etcd"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.141373     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-479871"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.141715     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-479871" containerName="kube-apiserver"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.146493     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-479871\" already exists" pod="kube-system/kube-scheduler-newest-cni-479871"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: E1228 06:57:37.146622     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.198314     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.273887     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-xtables-lock\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.273928     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-xtables-lock\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274185     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d-lib-modules\") pod \"kube-proxy-kzkbr\" (UID: \"a72ff074-7d43-4ea4-b42a-3a8e5e5fea1d\") " pod="kube-system/kube-proxy-kzkbr"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274241     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-lib-modules\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:37 newest-cni-479871 kubelet[674]: I1228 06:57:37.274374     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f610ca19-f52f-41ef-90d7-6ae6b47445da-cni-cfg\") pod \"kindnet-74fnf\" (UID: \"f610ca19-f52f-41ef-90d7-6ae6b47445da\") " pod="kube-system/kindnet-74fnf"
	Dec 28 06:57:38 newest-cni-479871 kubelet[674]: E1228 06:57:38.148927     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-479871" containerName="kube-scheduler"
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:39 newest-cni-479871 kubelet[674]: I1228 06:57:39.160996     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:39 newest-cni-479871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
** stderr ** 
	E1228 06:57:43.964425  277226 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:43Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.046592  277226 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.137106  277226 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.212866  277226 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.281856  277226 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.350337  277226 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.413154  277226 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.473923  277226 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:44.535278  277226 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:44Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
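Every "Failed to list containers" error above shares one root cause: the log collector shells out to runc with --root /run/runc, and that directory does not exist on this CRI-O node. As a minimal triage sketch (assuming the profile is still running and that crictl is available in the node image, neither of which this log confirms), the same containers can be enumerated through the CRI instead:

	# Enumerate all containers via CRI-O rather than runc (hypothetical manual step):
	minikube -p newest-cni-479871 ssh -- sudo crictl ps -a
	# Narrow to a single component, e.g. the apiserver:
	minikube -p newest-cni-479871 ssh -- sudo crictl ps -a --name kube-apiserver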
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-479871 -n newest-cni-479871
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-479871 -n newest-cni-479871: exit status 2 (334.725516ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-479871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv: exit status 1 (61.814253ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-cqtm4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-xjkqj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-854wv" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-479871 describe pod coredns-7d764666f9-cqtm4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-xjkqj kubernetes-dashboard-b84665fb8-854wv: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.47s)
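The 6.47s Pause failure reduces to the single probe repeated throughout the stderr above: minikube decides whether containers are paused by listing them with runc. A minimal reproduction, assuming shell access to the node via minikube ssh (the directory check is an added diagnostic, not something the test itself runs):

	# The exact probe minikube issues before pausing (exits 1 on this image):
	minikube -p newest-cni-479871 ssh -- sudo runc --root /run/runc list -f json
	# Confirm the expected runc root is simply absent:
	minikube -p newest-cni-479871 ssh -- ls -ld /run/runc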

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-500581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-500581 --alsologtostderr -v=1: exit status 80 (2.075744234s)

-- stdout --
	* Pausing node default-k8s-diff-port-500581 ... 
	
	

-- /stdout --
** stderr ** 
	I1228 06:57:50.896462  278755 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:50.896699  278755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:50.896707  278755 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:50.896712  278755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:50.896968  278755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:50.897247  278755 out.go:368] Setting JSON to false
	I1228 06:57:50.897264  278755 mustload.go:66] Loading cluster: default-k8s-diff-port-500581
	I1228 06:57:50.897581  278755 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:50.897949  278755 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-500581 --format={{.State.Status}}
	I1228 06:57:50.915755  278755 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:57:50.916020  278755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:50.971972  278755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-28 06:57:50.961172919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:50.973143  278755 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-500581 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:57:51.015404  278755 out.go:179] * Pausing node default-k8s-diff-port-500581 ... 
	I1228 06:57:51.091393  278755 host.go:66] Checking if "default-k8s-diff-port-500581" exists ...
	I1228 06:57:51.091762  278755 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:51.091818  278755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-500581
	I1228 06:57:51.116262  278755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/default-k8s-diff-port-500581/id_rsa Username:docker}
	I1228 06:57:51.209483  278755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:51.222059  278755 pause.go:52] kubelet running: true
	I1228 06:57:51.222120  278755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:51.393001  278755 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:51.449356  278755 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:51.464685  278755 retry.go:84] will retry after 200ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:51Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:51.713198  278755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:51.726759  278755 pause.go:52] kubelet running: false
	I1228 06:57:51.726836  278755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:51.871442  278755 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:51.925488  278755 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:52.223712  278755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:52.250784  278755 pause.go:52] kubelet running: false
	I1228 06:57:52.250848  278755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:52.828122  278755 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:52.887578  278755 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:52.901900  278755 out.go:203] 
	W1228 06:57:52.903398  278755 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:57:52.903426  278755 out.go:285] * 
	* 
	W1228 06:57:52.905591  278755 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:57:52.907131  278755 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-500581 --alsologtostderr -v=1 failed: exit status 80
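Note the side effect visible in the retries above: the first pause iteration reports "kubelet running: true" and disables it, so after the pause fails the node is left with kubelet stopped and disabled. A sketch of undoing that by hand, assuming the profile's container is still up:

	# Hypothetical recovery after the failed pause; re-enables and starts kubelet:
	minikube -p default-k8s-diff-port-500581 ssh -- sudo systemctl enable --now kubelet
	minikube -p default-k8s-diff-port-500581 ssh -- sudo systemctl is-active kubelet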
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-500581
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-500581:

-- stdout --
	[
	    {
	        "Id": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	        "Created": "2025-12-28T06:55:57.058727966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:51.554117942Z",
	            "FinishedAt": "2025-12-28T06:56:50.606681394Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hostname",
	        "HostsPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hosts",
	        "LogPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db-json.log",
	        "Name": "/default-k8s-diff-port-500581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-500581:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-500581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	                "LowerDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-500581",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-500581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-500581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f869c433331653babaa48dc1fe6829c0440e59db2e53d701411d3b78b258e6a1",
	            "SandboxKey": "/var/run/docker/netns/f869c4333316",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-500581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "561fd4603b1e0bc4629e98f37fdc1fd471ed3bacfee2a3df062fc13a3b58944e",
	                    "EndpointID": "c08ecdbb34d90ac6e25ee648992fa752d3dcc2ae90558321ca6debc877436aa2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7a:03:11:63:66:8e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-500581",
	                        "da0ad7d17416"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
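
The inspect output above shows the kic container's port publishing scheme: each exposed guest port (22, 2376, 5000, 8444, 32443/tcp) is bound to 127.0.0.1 on an ephemeral host port (33088-33092 here). A minimal sketch for reading one mapping back out with a Go template, assuming the container still exists under the profile name shown in the log:

	docker container inspect default-k8s-diff-port-500581 \
	  --format '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'

Against this container the command would print 33091, matching the NetworkSettings block above.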
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581: exit status 2 (380.033422ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
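
minikube encodes component health in the status exit code (bit flags per component), so a non-zero exit with Host still reporting "Running" is consistent with a cluster whose other components were just paused; the harness therefore tags it "may be ok". A sketch of the same probe run by hand (command copied from the log above; the exact bit meanings are minikube's and may change):

	out/minikube-linux-amd64 status --format={{.Host}} \
	  -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581; echo "exit=$?"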
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25: (1.141134756s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-610916               │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ default-k8s-diff-port-500581 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p default-k8s-diff-port-500581 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ embed-certs-422591 image list --format=json                                                                                                                                                                                                   │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p embed-certs-422591 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:48.092140  278228 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:48.092378  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092386  278228 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:48.092390  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092563  278228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:48.093025  278228 out.go:368] Setting JSON to false
	I1228 06:57:48.094218  278228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2420,"bootTime":1766902648,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:48.094271  278228 start.go:143] virtualization: kvm guest
	I1228 06:57:48.096346  278228 out.go:179] * [kindnet-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:48.097696  278228 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:48.097711  278228 notify.go:221] Checking for updates...
	I1228 06:57:48.099961  278228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:48.101206  278228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:48.102372  278228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:48.103893  278228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:48.105015  278228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:48.106663  278228 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106765  278228 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106861  278228 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106961  278228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:48.131005  278228 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:48.131171  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.189150  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.178186648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.189241  278228 docker.go:319] overlay module found
	I1228 06:57:48.191304  278228 out.go:179] * Using the docker driver based on user configuration
	I1228 06:57:48.193249  278228 start.go:309] selected driver: docker
	I1228 06:57:48.193265  278228 start.go:928] validating driver "docker" against <nil>
	I1228 06:57:48.193284  278228 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:48.193804  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.254205  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.244834861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.254370  278228 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:57:48.254573  278228 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:48.255833  278228 out.go:179] * Using Docker driver with root privileges
	I1228 06:57:48.256951  278228 cni.go:84] Creating CNI manager for "kindnet"
	I1228 06:57:48.256968  278228 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:57:48.257020  278228 start.go:353] cluster config:
	{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:48.258226  278228 out.go:179] * Starting "kindnet-610916" primary control-plane node in "kindnet-610916" cluster
	I1228 06:57:48.259087  278228 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:48.260119  278228 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:48.261011  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.261053  278228 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:48.261062  278228 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:48.261099  278228 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:48.261144  278228 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:48.261157  278228 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:48.261238  278228 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json ...
	I1228 06:57:48.261266  278228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json: {Name:mk0bc80a535dbef6153fe5637e5a21a1797ea2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:48.282470  278228 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:48.282492  278228 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:48.282511  278228 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:48.282540  278228 start.go:360] acquireMachinesLock for kindnet-610916: {Name:mk606eee79fd57ff798c0475285cd3fc5d0868a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:48.282648  278228 start.go:364] duration metric: took 90.358µs to acquireMachinesLock for "kindnet-610916"
	I1228 06:57:48.282689  278228 start.go:93] Provisioning new machine with config: &{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:48.282765  278228 start.go:125] createHost starting for "" (driver="docker")
	W1228 06:57:44.957551  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:47.457801  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	I1228 06:57:48.285477  278228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:57:48.285727  278228 start.go:159] libmachine.API.Create for "kindnet-610916" (driver="docker")
	I1228 06:57:48.285762  278228 client.go:173] LocalClient.Create starting
	I1228 06:57:48.285841  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:57:48.285884  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.285904  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.285963  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:57:48.285990  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.286003  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.286341  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:57:48.302864  278228 cli_runner.go:211] docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:57:48.302927  278228 network_create.go:284] running [docker network inspect kindnet-610916] to gather additional debugging logs...
	I1228 06:57:48.302943  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916
	W1228 06:57:48.319307  278228 cli_runner.go:211] docker network inspect kindnet-610916 returned with exit code 1
	I1228 06:57:48.319353  278228 network_create.go:287] error running [docker network inspect kindnet-610916]: docker network inspect kindnet-610916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-610916 not found
	I1228 06:57:48.319372  278228 network_create.go:289] output of [docker network inspect kindnet-610916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-610916 not found
	
	** /stderr **
	I1228 06:57:48.319446  278228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:48.336850  278228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:57:48.337829  278228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:57:48.338770  278228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:57:48.339438  278228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:57:48.340434  278228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb5ad0}
	I1228 06:57:48.340461  278228 network_create.go:124] attempt to create docker network kindnet-610916 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:57:48.340506  278228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-610916 kindnet-610916
	I1228 06:57:48.395723  278228 network_create.go:108] docker network kindnet-610916 192.168.85.0/24 created
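	# [hedged sketch] the network.go scan above walks the private 192.168.x.0/24
	# ranges already claimed by other profile bridges and takes the first free
	# one; the occupied subnets can be listed by hand with the docker CLI:
	for n in $(docker network ls --format '{{.Name}}'); do
	  docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done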
	I1228 06:57:48.395757  278228 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-610916" container
	I1228 06:57:48.395824  278228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:57:48.417714  278228 cli_runner.go:164] Run: docker volume create kindnet-610916 --label name.minikube.sigs.k8s.io=kindnet-610916 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:57:48.439257  278228 oci.go:103] Successfully created a docker volume kindnet-610916
	I1228 06:57:48.439339  278228 cli_runner.go:164] Run: docker run --rm --name kindnet-610916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --entrypoint /usr/bin/test -v kindnet-610916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:57:48.836799  278228 oci.go:107] Successfully prepared a docker volume kindnet-610916
	I1228 06:57:48.836882  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.836910  278228 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:57:48.836980  278228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:57:52.712727  278228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.875692449s)
	I1228 06:57:52.712770  278228 kic.go:203] duration metric: took 3.875856935s to extract preloaded images to volume ...
	W1228 06:57:52.712882  278228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:57:52.712922  278228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:57:52.712974  278228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:57:52.777238  278228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-610916 --name kindnet-610916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-610916 --network kindnet-610916 --ip 192.168.85.2 --volume kindnet-610916:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:57:53.076304  278228 cli_runner.go:164] Run: docker container inspect kindnet-610916 --format={{.State.Running}}
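	# [hedged sketch] the final Run above is minikube polling the new container's
	# state; the equivalent one-off check (container name from this log):
	docker container inspect kindnet-610916 --format '{{.State.Running}}'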
	
	
	==> CRI-O <==
	Dec 28 06:57:22 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:22.349850923Z" level=info msg="Started container" PID=1780 containerID=8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper id=330539c3-e275-4757-b614-f009d7744af7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbde486ad15338762ba9a288d5a2545c7393a2d67330fe4e1ad78feebe88c816
	Dec 28 06:57:23 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:23.315363283Z" level=info msg="Removing container: 046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce" id=8918bc52-abbb-41a3-bb98-e4583fb8b41c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:23 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:23.410640201Z" level=info msg="Removed container 046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=8918bc52-abbb-41a3-bb98-e4583fb8b41c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.339985802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20616013-6a0b-44b5-bede-5f07b76eeffd name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.340926356Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9522e047-9507-4366-a3d5-1eb9fd5bbf3a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.341898546Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5f1c3e5b-26ae-4a53-b073-b312041023df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.342084954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.3464709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.346672399Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/550498a158471ee73ada4834d4edecbb9d33680c96b4418816df8005cac0e5e7/merged/etc/passwd: no such file or directory"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.346707817Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/550498a158471ee73ada4834d4edecbb9d33680c96b4418816df8005cac0e5e7/merged/etc/group: no such file or directory"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.347039337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.38715033Z" level=info msg="Created container 31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2: kube-system/storage-provisioner/storage-provisioner" id=5f1c3e5b-26ae-4a53-b073-b312041023df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.387861083Z" level=info msg="Starting container: 31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2" id=47039828-bf3e-4a30-b182-f4c4b4c006b9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.389828698Z" level=info msg="Started container" PID=1795 containerID=31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2 description=kube-system/storage-provisioner/storage-provisioner id=47039828-bf3e-4a30-b182-f4c4b4c006b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=149a169dd8a9dabe3b27ae71858820d7c08f9bb847060e551127e2aebb0349df
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.171615128Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3eefce2-62a3-46df-946c-7e591107ad03 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.17257462Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=48580712-9929-4909-a77f-2883cb15357e name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.173703845Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=c8f8eac5-9171-41d6-8440-79fa764a9490 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.173894013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.180891309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.181494404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.212198529Z" level=info msg="Created container 7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=c8f8eac5-9171-41d6-8440-79fa764a9490 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.213123218Z" level=info msg="Starting container: 7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c" id=987587d0-0f7d-45ed-bb81-81708b19146a name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.21599388Z" level=info msg="Started container" PID=1833 containerID=7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper id=987587d0-0f7d-45ed-bb81-81708b19146a name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbde486ad15338762ba9a288d5a2545c7393a2d67330fe4e1ad78feebe88c816
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.384522205Z" level=info msg="Removing container: 8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2" id=3c892ecb-1868-4b93-8ac0-02ba30301389 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.394730097Z" level=info msg="Removed container 8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=3c892ecb-1868-4b93-8ac0-02ba30301389 name=/runtime.v1.RuntimeService/RemoveContainer
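	# [hedged sketch] the Created/Started/Removed cycle above is the
	# dashboard-metrics-scraper pod crash-looping (ATTEMPT 3 in the table below);
	# its container history can be pulled from inside the node with crictl:
	out/minikube-linux-amd64 -p default-k8s-diff-port-500581 ssh -- \
	  sudo crictl ps -a --name dashboard-metrics-scraper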
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7673bd8aaca68       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   fbde486ad1533       dashboard-metrics-scraper-867fb5f87b-zzbsp             kubernetes-dashboard
	31e15830afb2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         2                   149a169dd8a9d       storage-provisioner                                    kube-system
	f18d9b0bf84a7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   12b437130f70c       kubernetes-dashboard-b84665fb8-cl9z8                   kubernetes-dashboard
	d719aa8243b3a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   f125f594dd373       busybox                                                default
	f733ba35663a8       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     1                   8bcd5d7f5a8a9       coredns-7d764666f9-9glh9                               kube-system
	1f3d8e928f780       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           52 seconds ago      Running             kube-proxy                  1                   281a2f8b82e55       kube-proxy-95gmh                                       kube-system
	53d6f0d3be69c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         1                   149a169dd8a9d       storage-provisioner                                    kube-system
	967268c9366fc       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 1                   2c680ac066c91       kindnet-lsrww                                          kube-system
	860ee028cf3be       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           55 seconds ago      Running             etcd                        1                   5e9bb71665183       etcd-default-k8s-diff-port-500581                      kube-system
	75829447ed214       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           55 seconds ago      Running             kube-scheduler              1                   621ead8d7d541       kube-scheduler-default-k8s-diff-port-500581            kube-system
	6a99f52e2e305       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           55 seconds ago      Running             kube-apiserver              1                   1b54ef99af8eb       kube-apiserver-default-k8s-diff-port-500581            kube-system
	e0497769c0893       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           55 seconds ago      Running             kube-controller-manager     1                   6d2ee4cfc0c0e       kube-controller-manager-default-k8s-diff-port-500581   kube-system
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-500581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-500581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-500581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-500581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-500581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                b3bebd8a-2cf1-4ff4-9600-b6e76b191bd7
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-7d764666f9-9glh9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-500581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         106s
	  kube-system                 kindnet-lsrww                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-500581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-500581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-95gmh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-500581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zzbsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cl9z8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
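	  (cross-check, not part of the kubectl output: these totals are the column sums of the pod table above:
	    CPU requests:    100m + 100m + 100m + 250m + 200m + 100m = 850m
	    CPU limits:      100m (kindnet is the only pod with a CPU limit)
	    memory requests: 70Mi + 100Mi + 50Mi = 220Mi
	    memory limits:   170Mi + 50Mi = 220Mi)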
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  102s  node-controller  Node default-k8s-diff-port-500581 event: Registered Node default-k8s-diff-port-500581 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node default-k8s-diff-port-500581 event: Registered Node default-k8s-diff-port-500581 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:54 up 40 min,  0 user,  load average: 4.68, 3.30, 2.02
	Linux default-k8s-diff-port-500581 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:13 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:13.285978     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:14 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:14.084141     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-500581" containerName="etcd"
	Dec 28 06:57:14 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:14.288347     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-500581" containerName="etcd"
	Dec 28 06:57:17 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:17.911972     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-500581" containerName="kube-controller-manager"
	Dec 28 06:57:22 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:22.036276     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:22 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:22.036320     733 scope.go:122] "RemoveContainer" containerID="046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:23.313977     733 scope.go:122] "RemoveContainer" containerID="046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:23.314294     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:23.314333     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:23.314547     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:32.035828     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:32.035871     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:32.036054     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:32.339542     733 scope.go:122] "RemoveContainer" containerID="53d6f0d3be69c5bdda18486faff57085a98fc06686e0ab96dababe5245118f65"
	Dec 28 06:57:37 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:37.334845     733 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9glh9" containerName="coredns"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.171052     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.171113     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.382023     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.382329     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.382364     733 scope.go:122] "RemoveContainer" containerID="7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.382562     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
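	(The dashboard-metrics-scraper entries above show kubelet's standard CrashLoopBackOff progression: back-off 10s, then 20s, then 40s; kubelet doubles the delay after each failed restart, up to a 5m cap. A minimal way to pull the crashing container's last output, reusing the context and pod name from this run:
	  kubectl --context default-k8s-diff-port-500581 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-867fb5f87b-zzbsp --previous
	--previous prints the log of the last terminated instance, i.e. the one CrashLoopBackOff keeps restarting.)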
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 06:57:53.705596  279708 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:53.775879  279708 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:53.853262  279708 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:53.919368  279708 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:53.990261  279708 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.059861  279708 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.123396  279708 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.187976  279708 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.253744  279708 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
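Every per-component listing above fails identically: the log collector shells out to `sudo runc --root /run/runc list -f json` on the node, and /run/runc does not exist there. A quick way to check on the node whether that state directory is simply absent versus cri-o keeping runc state elsewhere; a sketch reusing the profile name from this run (runtime_root is the cri-o option that would relocate runc state, so grepping the generated config shows where cri-o actually keeps it):

	# does the state dir minikube probes exist, and where does cri-o say runc state lives?
	minikube -p default-k8s-diff-port-500581 ssh -- \
	  'ls -ld /run/runc; sudo crio config | grep -n runtime_root'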
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581: exit status 2 (343.593278ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-500581
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-500581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	        "Created": "2025-12-28T06:55:57.058727966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:51.554117942Z",
	            "FinishedAt": "2025-12-28T06:56:50.606681394Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hostname",
	        "HostsPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/hosts",
	        "LogPath": "/var/lib/docker/containers/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db/da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db-json.log",
	        "Name": "/default-k8s-diff-port-500581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-500581:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-500581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da0ad7d174162d65c66e3ecaafa24d0b1252ec7bd2985277aa585d02014d05db",
	                "LowerDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85103bb99adaaea3a94cf4ab6a896e25cc4e5dc2ccbdb18ec5bbe340080a52e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-500581",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-500581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-500581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-500581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f869c433331653babaa48dc1fe6829c0440e59db2e53d701411d3b78b258e6a1",
	            "SandboxKey": "/var/run/docker/netns/f869c4333316",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-500581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "561fd4603b1e0bc4629e98f37fdc1fd471ed3bacfee2a3df062fc13a3b58944e",
	                    "EndpointID": "c08ecdbb34d90ac6e25ee648992fa752d3dcc2ae90558321ca6debc877436aa2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7a:03:11:63:66:8e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-500581",
	                        "da0ad7d17416"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
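For the pause post-mortem, the useful fields buried in the inspect blob above are State (Running:true, Paused:false even though a pause was just attempted; StartedAt 06:56:51 being later than FinishedAt 06:56:50 shows the container was stopped and restarted by the second start) and the 22/tcp host port (33088) the harness dials for SSH. Both can be pulled without scanning the JSON by hand; a sketch using jq against the same command:

	docker inspect default-k8s-diff-port-500581 \
	  | jq '.[0].State.Status, .[0].State.Paused, .[0].NetworkSettings.Ports["22/tcp"][0].HostPort'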
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581: exit status 2 (356.28688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-500581 logs -n 25: (1.114006809s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-610916               │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ default-k8s-diff-port-500581 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p default-k8s-diff-port-500581 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ embed-certs-422591 image list --format=json                                                                                                                                                                                                   │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p embed-certs-422591 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:48.092140  278228 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:48.092378  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092386  278228 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:48.092390  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092563  278228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:48.093025  278228 out.go:368] Setting JSON to false
	I1228 06:57:48.094218  278228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2420,"bootTime":1766902648,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:48.094271  278228 start.go:143] virtualization: kvm guest
	I1228 06:57:48.096346  278228 out.go:179] * [kindnet-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:48.097696  278228 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:48.097711  278228 notify.go:221] Checking for updates...
	I1228 06:57:48.099961  278228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:48.101206  278228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:48.102372  278228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:48.103893  278228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:48.105015  278228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:48.106663  278228 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106765  278228 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106861  278228 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106961  278228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:48.131005  278228 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:48.131171  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.189150  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.178186648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.189241  278228 docker.go:319] overlay module found
	I1228 06:57:48.191304  278228 out.go:179] * Using the docker driver based on user configuration
	I1228 06:57:48.193249  278228 start.go:309] selected driver: docker
	I1228 06:57:48.193265  278228 start.go:928] validating driver "docker" against <nil>
	I1228 06:57:48.193284  278228 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:48.193804  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.254205  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.244834861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.254370  278228 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:57:48.254573  278228 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:48.255833  278228 out.go:179] * Using Docker driver with root privileges
	I1228 06:57:48.256951  278228 cni.go:84] Creating CNI manager for "kindnet"
	I1228 06:57:48.256968  278228 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:57:48.257020  278228 start.go:353] cluster config:
	{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:48.258226  278228 out.go:179] * Starting "kindnet-610916" primary control-plane node in "kindnet-610916" cluster
	I1228 06:57:48.259087  278228 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:48.260119  278228 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:48.261011  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.261053  278228 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:48.261062  278228 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:48.261099  278228 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:48.261144  278228 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:48.261157  278228 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:48.261238  278228 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json ...
	I1228 06:57:48.261266  278228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json: {Name:mk0bc80a535dbef6153fe5637e5a21a1797ea2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:48.282470  278228 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:48.282492  278228 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:48.282511  278228 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:48.282540  278228 start.go:360] acquireMachinesLock for kindnet-610916: {Name:mk606eee79fd57ff798c0475285cd3fc5d0868a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:48.282648  278228 start.go:364] duration metric: took 90.358µs to acquireMachinesLock for "kindnet-610916"
	I1228 06:57:48.282689  278228 start.go:93] Provisioning new machine with config: &{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:48.282765  278228 start.go:125] createHost starting for "" (driver="docker")
	W1228 06:57:44.957551  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:47.457801  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	I1228 06:57:48.285477  278228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:57:48.285727  278228 start.go:159] libmachine.API.Create for "kindnet-610916" (driver="docker")
	I1228 06:57:48.285762  278228 client.go:173] LocalClient.Create starting
	I1228 06:57:48.285841  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:57:48.285884  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.285904  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.285963  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:57:48.285990  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.286003  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.286341  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:57:48.302864  278228 cli_runner.go:211] docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:57:48.302927  278228 network_create.go:284] running [docker network inspect kindnet-610916] to gather additional debugging logs...
	I1228 06:57:48.302943  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916
	W1228 06:57:48.319307  278228 cli_runner.go:211] docker network inspect kindnet-610916 returned with exit code 1
	I1228 06:57:48.319353  278228 network_create.go:287] error running [docker network inspect kindnet-610916]: docker network inspect kindnet-610916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-610916 not found
	I1228 06:57:48.319372  278228 network_create.go:289] output of [docker network inspect kindnet-610916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-610916 not found
	
	** /stderr **
	I1228 06:57:48.319446  278228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:48.336850  278228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:57:48.337829  278228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:57:48.338770  278228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:57:48.339438  278228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:57:48.340434  278228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb5ad0}
	I1228 06:57:48.340461  278228 network_create.go:124] attempt to create docker network kindnet-610916 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:57:48.340506  278228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-610916 kindnet-610916
	I1228 06:57:48.395723  278228 network_create.go:108] docker network kindnet-610916 192.168.85.0/24 created
	I1228 06:57:48.395757  278228 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-610916" container
	I1228 06:57:48.395824  278228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:57:48.417714  278228 cli_runner.go:164] Run: docker volume create kindnet-610916 --label name.minikube.sigs.k8s.io=kindnet-610916 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:57:48.439257  278228 oci.go:103] Successfully created a docker volume kindnet-610916
	I1228 06:57:48.439339  278228 cli_runner.go:164] Run: docker run --rm --name kindnet-610916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --entrypoint /usr/bin/test -v kindnet-610916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:57:48.836799  278228 oci.go:107] Successfully prepared a docker volume kindnet-610916
	I1228 06:57:48.836882  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.836910  278228 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:57:48.836980  278228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:57:52.712727  278228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.875692449s)
	I1228 06:57:52.712770  278228 kic.go:203] duration metric: took 3.875856935s to extract preloaded images to volume ...
	W1228 06:57:52.712882  278228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:57:52.712922  278228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:57:52.712974  278228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:57:52.777238  278228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-610916 --name kindnet-610916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-610916 --network kindnet-610916 --ip 192.168.85.2 --volume kindnet-610916:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:57:53.076304  278228 cli_runner.go:164] Run: docker container inspect kindnet-610916 --format={{.State.Running}}
	W1228 06:57:49.458420  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:51.957779  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:53.957965  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 28 06:57:22 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:22.349850923Z" level=info msg="Started container" PID=1780 containerID=8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper id=330539c3-e275-4757-b614-f009d7744af7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbde486ad15338762ba9a288d5a2545c7393a2d67330fe4e1ad78feebe88c816
	Dec 28 06:57:23 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:23.315363283Z" level=info msg="Removing container: 046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce" id=8918bc52-abbb-41a3-bb98-e4583fb8b41c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:23 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:23.410640201Z" level=info msg="Removed container 046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=8918bc52-abbb-41a3-bb98-e4583fb8b41c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.339985802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20616013-6a0b-44b5-bede-5f07b76eeffd name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.340926356Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9522e047-9507-4366-a3d5-1eb9fd5bbf3a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.341898546Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5f1c3e5b-26ae-4a53-b073-b312041023df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.342084954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.3464709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.346672399Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/550498a158471ee73ada4834d4edecbb9d33680c96b4418816df8005cac0e5e7/merged/etc/passwd: no such file or directory"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.346707817Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/550498a158471ee73ada4834d4edecbb9d33680c96b4418816df8005cac0e5e7/merged/etc/group: no such file or directory"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.347039337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.38715033Z" level=info msg="Created container 31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2: kube-system/storage-provisioner/storage-provisioner" id=5f1c3e5b-26ae-4a53-b073-b312041023df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.387861083Z" level=info msg="Starting container: 31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2" id=47039828-bf3e-4a30-b182-f4c4b4c006b9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:32 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:32.389828698Z" level=info msg="Started container" PID=1795 containerID=31e15830afb2b2cc0ed2e73f6aa2ebe31e0cb15c7ec5035b91533ff8f0b1c9a2 description=kube-system/storage-provisioner/storage-provisioner id=47039828-bf3e-4a30-b182-f4c4b4c006b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=149a169dd8a9dabe3b27ae71858820d7c08f9bb847060e551127e2aebb0349df
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.171615128Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3eefce2-62a3-46df-946c-7e591107ad03 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.17257462Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=48580712-9929-4909-a77f-2883cb15357e name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.173703845Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=c8f8eac5-9171-41d6-8440-79fa764a9490 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.173894013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.180891309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.181494404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.212198529Z" level=info msg="Created container 7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=c8f8eac5-9171-41d6-8440-79fa764a9490 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.213123218Z" level=info msg="Starting container: 7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c" id=987587d0-0f7d-45ed-bb81-81708b19146a name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.21599388Z" level=info msg="Started container" PID=1833 containerID=7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper id=987587d0-0f7d-45ed-bb81-81708b19146a name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbde486ad15338762ba9a288d5a2545c7393a2d67330fe4e1ad78feebe88c816
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.384522205Z" level=info msg="Removing container: 8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2" id=3c892ecb-1868-4b93-8ac0-02ba30301389 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:48 default-k8s-diff-port-500581 crio[576]: time="2025-12-28T06:57:48.394730097Z" level=info msg="Removed container 8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp/dashboard-metrics-scraper" id=3c892ecb-1868-4b93-8ac0-02ba30301389 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7673bd8aaca68       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   fbde486ad1533       dashboard-metrics-scraper-867fb5f87b-zzbsp             kubernetes-dashboard
	31e15830afb2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         2                   149a169dd8a9d       storage-provisioner                                    kube-system
	f18d9b0bf84a7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   12b437130f70c       kubernetes-dashboard-b84665fb8-cl9z8                   kubernetes-dashboard
	d719aa8243b3a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   f125f594dd373       busybox                                                default
	f733ba35663a8       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     1                   8bcd5d7f5a8a9       coredns-7d764666f9-9glh9                               kube-system
	1f3d8e928f780       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  1                   281a2f8b82e55       kube-proxy-95gmh                                       kube-system
	53d6f0d3be69c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         1                   149a169dd8a9d       storage-provisioner                                    kube-system
	967268c9366fc       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 1                   2c680ac066c91       kindnet-lsrww                                          kube-system
	860ee028cf3be       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        1                   5e9bb71665183       etcd-default-k8s-diff-port-500581                      kube-system
	75829447ed214       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              1                   621ead8d7d541       kube-scheduler-default-k8s-diff-port-500581            kube-system
	6a99f52e2e305       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              1                   1b54ef99af8eb       kube-apiserver-default-k8s-diff-port-500581            kube-system
	e0497769c0893       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     1                   6d2ee4cfc0c0e       kube-controller-manager-default-k8s-diff-port-500581   kube-system
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-500581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-500581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=default-k8s-diff-port-500581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-500581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:57:41 +0000   Sun, 28 Dec 2025 06:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-500581
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                b3bebd8a-2cf1-4ff4-9600-b6e76b191bd7
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-7d764666f9-9glh9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-500581                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-lsrww                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-500581             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-500581    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-95gmh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-500581             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zzbsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cl9z8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  104s  node-controller  Node default-k8s-diff-port-500581 event: Registered Node default-k8s-diff-port-500581 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-500581 event: Registered Node default-k8s-diff-port-500581 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:56 up 40 min,  0 user,  load average: 4.68, 3.30, 2.02
	Linux default-k8s-diff-port-500581 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:13 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:13.285978     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:14 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:14.084141     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-500581" containerName="etcd"
	Dec 28 06:57:14 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:14.288347     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-500581" containerName="etcd"
	Dec 28 06:57:17 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:17.911972     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-500581" containerName="kube-controller-manager"
	Dec 28 06:57:22 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:22.036276     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:22 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:22.036320     733 scope.go:122] "RemoveContainer" containerID="046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:23.313977     733 scope.go:122] "RemoveContainer" containerID="046f06947b70a46f2a809a32960d69f64813c32d038933ec7f376265e69983ce"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:23.314294     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:23.314333     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:23 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:23.314547     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:32.035828     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:32.035871     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:32.036054     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:32 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:32.339542     733 scope.go:122] "RemoveContainer" containerID="53d6f0d3be69c5bdda18486faff57085a98fc06686e0ab96dababe5245118f65"
	Dec 28 06:57:37 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:37.334845     733 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9glh9" containerName="coredns"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.171052     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.171113     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.382023     733 scope.go:122] "RemoveContainer" containerID="8bee676b704e20ba8a092af2abd07537c85a7804a5673f91fc565fb6f729b4a2"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.382329     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: I1228 06:57:48.382364     733 scope.go:122] "RemoveContainer" containerID="7673bd8aaca68e0213430ba3209522465063644a7d1f618f59fd74d8cedc116c"
	Dec 28 06:57:48 default-k8s-diff-port-500581 kubelet[733]: E1228 06:57:48.382562     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zzbsp_kubernetes-dashboard(e431e76d-8f1b-4a30-b6bf-d0523cc61695)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zzbsp" podUID="e431e76d-8f1b-4a30-b6bf-d0523cc61695"
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:51 default-k8s-diff-port-500581 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
	

                                                
                                                
-- /stdout --
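
The network_create lines earlier in this dump show how minikube picks a Docker subnet: it walks private /24 candidates (192.168.49.0, .58, .67, .76, ...) and settles on the first one whose bridge gateway is not already present on the host. A minimal sketch of that scan, assuming the step-of-9 candidates visible in the log and using a simplified host-interface check in place of minikube's real bookkeeping:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet mirrors the scan in the network.go lines above:
// try 192.168.49.0/24, 192.168.58.0/24, ... (third octet stepping
// by 9) and return the first candidate whose gateway address is not
// already bound to a host interface. The candidates and the step
// match this log; the interface check is a simplified stand-in.
func firstFreeSubnet() (string, error) {
	taken := map[string]bool{}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for octet := 49; octet <= 247; octet += 9 {
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if !taken[gateway] {
			return fmt.Sprintf("192.168.%d.0/24", octet), nil
		}
	}
	return "", fmt.Errorf("no free 192.168.0.0/16 subnet found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", subnet)
}

With the four bridges listed above in place (br-83d3c063481b through br-4435fbd1d5af), this scan lands on 192.168.85.0/24, matching the subnet the log settles on for kindnet-610916.
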
** stderr ** 
	E1228 06:57:55.601443  281148 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:55.661908  281148 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:55.739106  281148 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:55.808071  281148 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:55.872933  281148 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:55.944528  281148 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:55Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.007801  281148 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.076318  281148 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.155241  281148 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
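
The kubelet entries in the dump above show the dashboard-metrics-scraper restart delay doubling: back-off 10s, then 20s, then 40s. That is kubelet's CrashLoopBackOff schedule, which starts at 10s and doubles per consecutive failure; the 5-minute cap below is kubelet's documented default rather than something this log reaches. A minimal sketch of the schedule:

package main

import (
	"fmt"
	"time"
)

// crashLoopBackOff returns kubelet's restart delay after the nth
// consecutive failure: 10s base, doubling per restart, capped at
// 5m. This log only exhibits the 10s/20s/40s steps; the cap is
// kubelet's default, not observed here.
func crashLoopBackOff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("restart %d: back-off %s\n", n, crashLoopBackOff(n))
	}
}
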
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581: exit status 2 (345.558354ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.95s)
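
Every Pause and addons-disable failure in this report bottoms out in the same probe: minikube checks container state by running sudo runc --root /run/runc list -f json, and on these nodes /run/runc does not exist, so the command exits with status 1. A minimal Go sketch of that probe; the os.Stat fallback is an illustrative addition for diagnosis, not minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// List containers known to runc under its state directory, the same
// command the logs above show failing. On these CRI-O nodes
// /run/runc is absent, so the listing exits non-zero; the stat
// below distinguishes "directory missing" from other failures.
func main() {
	const root = "/run/runc"
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil {
		if _, statErr := os.Stat(root); os.IsNotExist(statErr) {
			fmt.Printf("%s does not exist; the runtime keeps its state elsewhere\n", root)
			return
		}
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}
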

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-422591 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-422591 --alsologtostderr -v=1: exit status 80 (2.09681114s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-422591 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:57:51.464865  278942 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:51.464985  278942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:51.464995  278942 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:51.464999  278942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:51.465212  278942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:51.465496  278942 out.go:368] Setting JSON to false
	I1228 06:57:51.465514  278942 mustload.go:66] Loading cluster: embed-certs-422591
	I1228 06:57:51.465872  278942 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:51.466373  278942 cli_runner.go:164] Run: docker container inspect embed-certs-422591 --format={{.State.Status}}
	I1228 06:57:51.486173  278942 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:57:51.486475  278942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:51.541194  278942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-28 06:57:51.531427767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:51.541818  278942 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766883634-22351/minikube-v1.37.0-1766883634-22351-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766883634-22351-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-422591 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1228 06:57:51.652510  278942 out.go:179] * Pausing node embed-certs-422591 ... 
	I1228 06:57:51.782142  278942 host.go:66] Checking if "embed-certs-422591" exists ...
	I1228 06:57:51.782525  278942 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:51.782588  278942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422591
	I1228 06:57:51.803344  278942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/embed-certs-422591/id_rsa Username:docker}
	I1228 06:57:51.893822  278942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:51.925855  278942 pause.go:52] kubelet running: true
	I1228 06:57:51.925929  278942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:52.848329  278942 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:52.906329  278942 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:52.924224  278942 retry.go:84] will retry after 200ms: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:52Z" level=error msg="open /run/runc: no such file or directory"
	I1228 06:57:53.161664  278942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:53.175933  278942 pause.go:52] kubelet running: false
	I1228 06:57:53.175992  278942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1228 06:57:53.384849  278942 ssh_runner.go:195] Run: sudo crio config
	I1228 06:57:53.467201  278942 ssh_runner.go:195] Run: sudo runc --root /run/runc list -f json
	I1228 06:57:53.488381  278942 out.go:203] 
	W1228 06:57:53.489918  278942 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1228 06:57:53.489945  278942 out.go:285] * 
	* 
	W1228 06:57:53.492705  278942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 06:57:53.495707  278942 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-422591 --alsologtostderr -v=1 failed: exit status 80
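
Note the retry.go:84 line in the trace above: after the first runc listing fails, minikube waits 200ms and re-runs the whole kubelet-check/crio-config/runc-list sequence before giving up. A sketch of that retry pattern, with the two-attempt budget and fixed 200ms delay taken from what this log shows rather than from minikube's actual policy:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry re-runs a flaky operation after a short delay, mirroring
// the "will retry after 200ms" behavior visible above. Attempts
// and delay are illustrative, not minikube's configuration.
func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	err := retry(2, 200*time.Millisecond, func() error {
		return errors.New("list running: runc: exit status 1")
	})
	fmt.Println("final:", err)
}

Because the underlying cause (no /run/runc directory) is persistent rather than transient, the second attempt fails identically and the pause exits with GUEST_PAUSE, as shown above.
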
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-422591
helpers_test.go:244: (dbg) docker inspect embed-certs-422591:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	        "Created": "2025-12-28T06:55:48.729729272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260543,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:49.759337515Z",
	            "FinishedAt": "2025-12-28T06:56:48.26540459Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hostname",
	        "HostsPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hosts",
	        "LogPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af-json.log",
	        "Name": "/embed-certs-422591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-422591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	                "LowerDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422591",
	                "Source": "/var/lib/docker/volumes/embed-certs-422591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422591",
	                "name.minikube.sigs.k8s.io": "embed-certs-422591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1856baf06335a2dc4443c166a0036a879b1f5f07f6464323cb1fcad3c838a11c",
	            "SandboxKey": "/var/run/docker/netns/1856baf06335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-422591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4435fbd1d5af1aad2bc3ae8af8af55a14dd14ed989f116744286ee3cfc1b4c5c",
	                    "EndpointID": "34aeca0b180c67c20d06af58860d00b10da2999d2c9278ccf5ee85bc2f1016d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9e:58:91:32:74:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422591",
	                        "ceaa376452cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
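The port mappings in the inspect output above are how the test harness reaches the container: each guest port (22 for SSH, 8443 for the API server) is published on an ephemeral host port bound to 127.0.0.1. A minimal sketch of pulling one mapping back out with a Go template (container name from this run; assumes the container is still up):

    # Hypothetical one-liner; walks the NetworkSettings.Ports structure shown above.
    docker container inspect embed-certs-422591 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # For this run it would print 33086.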
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591: exit status 2 (348.223933ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
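As the harness notes, exit status 2 from `status` is tolerated here: the host is Running, and the nonzero exit reflects some other component being reported as not OK rather than the command itself failing. A hedged way to see the per-component breakdown (profile name from this run):

    # Sketch: JSON output names each component's state explicitly.
    out/minikube-linux-amd64 status -p embed-certs-422591 --output=json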
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-422591 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-422591 logs -n 25: (1.081813723s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-610916               │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ default-k8s-diff-port-500581 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p default-k8s-diff-port-500581 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ embed-certs-422591 image list --format=json                                                                                                                                                                                                   │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p embed-certs-422591 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
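Rows without an END TIME are commands that never completed, and the last row, `pause -p embed-certs-422591`, is the operation under post-mortem here. To pull just this audit section out of a saved copy of the logs (plain sed; `logs.txt` is a hypothetical file name):

    sed -n '/==> Audit <==/,/==> Last Start <==/p' logs.txt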
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:48.092140  278228 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:48.092378  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092386  278228 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:48.092390  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092563  278228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:48.093025  278228 out.go:368] Setting JSON to false
	I1228 06:57:48.094218  278228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2420,"bootTime":1766902648,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:48.094271  278228 start.go:143] virtualization: kvm guest
	I1228 06:57:48.096346  278228 out.go:179] * [kindnet-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:48.097696  278228 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:48.097711  278228 notify.go:221] Checking for updates...
	I1228 06:57:48.099961  278228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:48.101206  278228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:48.102372  278228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:48.103893  278228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:48.105015  278228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:48.106663  278228 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106765  278228 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106861  278228 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106961  278228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:48.131005  278228 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:48.131171  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.189150  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.178186648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.189241  278228 docker.go:319] overlay module found
	I1228 06:57:48.191304  278228 out.go:179] * Using the docker driver based on user configuration
	I1228 06:57:48.193249  278228 start.go:309] selected driver: docker
	I1228 06:57:48.193265  278228 start.go:928] validating driver "docker" against <nil>
	I1228 06:57:48.193284  278228 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:48.193804  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.254205  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.244834861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.254370  278228 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:57:48.254573  278228 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:48.255833  278228 out.go:179] * Using Docker driver with root privileges
	I1228 06:57:48.256951  278228 cni.go:84] Creating CNI manager for "kindnet"
	I1228 06:57:48.256968  278228 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:57:48.257020  278228 start.go:353] cluster config:
	{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:48.258226  278228 out.go:179] * Starting "kindnet-610916" primary control-plane node in "kindnet-610916" cluster
	I1228 06:57:48.259087  278228 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:48.260119  278228 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:48.261011  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.261053  278228 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:48.261062  278228 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:48.261099  278228 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:48.261144  278228 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:48.261157  278228 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:48.261238  278228 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json ...
	I1228 06:57:48.261266  278228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json: {Name:mk0bc80a535dbef6153fe5637e5a21a1797ea2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:48.282470  278228 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:48.282492  278228 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:48.282511  278228 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:48.282540  278228 start.go:360] acquireMachinesLock for kindnet-610916: {Name:mk606eee79fd57ff798c0475285cd3fc5d0868a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:48.282648  278228 start.go:364] duration metric: took 90.358µs to acquireMachinesLock for "kindnet-610916"
	I1228 06:57:48.282689  278228 start.go:93] Provisioning new machine with config: &{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:48.282765  278228 start.go:125] createHost starting for "" (driver="docker")
	W1228 06:57:44.957551  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:47.457801  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	I1228 06:57:48.285477  278228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:57:48.285727  278228 start.go:159] libmachine.API.Create for "kindnet-610916" (driver="docker")
	I1228 06:57:48.285762  278228 client.go:173] LocalClient.Create starting
	I1228 06:57:48.285841  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:57:48.285884  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.285904  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.285963  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:57:48.285990  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.286003  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.286341  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:57:48.302864  278228 cli_runner.go:211] docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:57:48.302927  278228 network_create.go:284] running [docker network inspect kindnet-610916] to gather additional debugging logs...
	I1228 06:57:48.302943  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916
	W1228 06:57:48.319307  278228 cli_runner.go:211] docker network inspect kindnet-610916 returned with exit code 1
	I1228 06:57:48.319353  278228 network_create.go:287] error running [docker network inspect kindnet-610916]: docker network inspect kindnet-610916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-610916 not found
	I1228 06:57:48.319372  278228 network_create.go:289] output of [docker network inspect kindnet-610916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-610916 not found
	
	** /stderr **
	I1228 06:57:48.319446  278228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:48.336850  278228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:57:48.337829  278228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:57:48.338770  278228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:57:48.339438  278228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:57:48.340434  278228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb5ad0}
	I1228 06:57:48.340461  278228 network_create.go:124] attempt to create docker network kindnet-610916 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:57:48.340506  278228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-610916 kindnet-610916
	I1228 06:57:48.395723  278228 network_create.go:108] docker network kindnet-610916 192.168.85.0/24 created
	I1228 06:57:48.395757  278228 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-610916" container
	I1228 06:57:48.395824  278228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:57:48.417714  278228 cli_runner.go:164] Run: docker volume create kindnet-610916 --label name.minikube.sigs.k8s.io=kindnet-610916 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:57:48.439257  278228 oci.go:103] Successfully created a docker volume kindnet-610916
	I1228 06:57:48.439339  278228 cli_runner.go:164] Run: docker run --rm --name kindnet-610916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --entrypoint /usr/bin/test -v kindnet-610916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:57:48.836799  278228 oci.go:107] Successfully prepared a docker volume kindnet-610916
	I1228 06:57:48.836882  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.836910  278228 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:57:48.836980  278228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:57:52.712727  278228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.875692449s)
	I1228 06:57:52.712770  278228 kic.go:203] duration metric: took 3.875856935s to extract preloaded images to volume ...
	W1228 06:57:52.712882  278228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:57:52.712922  278228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:57:52.712974  278228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:57:52.777238  278228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-610916 --name kindnet-610916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-610916 --network kindnet-610916 --ip 192.168.85.2 --volume kindnet-610916:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:57:53.076304  278228 cli_runner.go:164] Run: docker container inspect kindnet-610916 --format={{.State.Running}}
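The Last Start log above walks the standard kic provisioning sequence for the unrelated kindnet-610916 profile: scan for a free /24 (192.168.49/58/67/76 were taken, so 192.168.85.0/24 was chosen), create the bridge network and a named volume, extract the preloaded image tarball into the volume via a throwaway tar container, then start the privileged node container on a static IP. A condensed sketch of that sequence, with labels, publish flags, and the kicbase digest trimmed for brevity (`<kicbase-image>` and `$PRELOAD_TARBALL` are placeholders):

    # Condensed from the cli_runner invocations logged above.
    docker network create --driver=bridge --subnet=192.168.85.0/24 \
      --gateway=192.168.85.1 kindnet-610916
    docker volume create kindnet-610916
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" -v kindnet-610916:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
    docker run -d -t --privileged --network kindnet-610916 --ip 192.168.85.2 \
      --volume kindnet-610916:/var --name kindnet-610916 <kicbase-image>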
	
	
	==> CRI-O <==
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.149907901Z" level=info msg="Started container" PID=1784 containerID=53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper id=fa3aa926-0c30-4cd6-8a49-46a3ff7023cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d8e55582aaded9a94c910f23feb0845d7c84c913e8550cc713cbbebad2cb273
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.206188609Z" level=info msg="Removing container: 77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549" id=7bf15b82-100b-47e7-8189-fcb6d4c50f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.215731685Z" level=info msg="Removed container 77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=7bf15b82-100b-47e7-8189-fcb6d4c50f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.236497491Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4a7073f8-e3d5-471b-a79b-778baae97f60 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.237552667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5aaea0cd-c3be-44de-991e-ecfccc20fd3a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.238622539Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6c9cf3ea-7e3b-4917-9e32-cd77bc153ffe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.238785626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.243946483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.244177667Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/edeefaf0b66864667b2cf5309848f4de1ffd7662a94ff697a5c7c1b0255b7b13/merged/etc/passwd: no such file or directory"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.244218312Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/edeefaf0b66864667b2cf5309848f4de1ffd7662a94ff697a5c7c1b0255b7b13/merged/etc/group: no such file or directory"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.245722098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.281491891Z" level=info msg="Created container 33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d: kube-system/storage-provisioner/storage-provisioner" id=6c9cf3ea-7e3b-4917-9e32-cd77bc153ffe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.28217866Z" level=info msg="Starting container: 33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d" id=f839828e-ba04-41c6-89f5-7c8a568c1e47 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.283807362Z" level=info msg="Started container" PID=1799 containerID=33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d description=kube-system/storage-provisioner/storage-provisioner id=f839828e-ba04-41c6-89f5-7c8a568c1e47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=678d8c12d440e697d053b48e1ab524234b9cb8247d07958dd2429a4b6e5c3ac5
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.106325743Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb27ba15-82fe-4c0c-9917-ab98849b8e4b name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.107451416Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9edec0b9-8855-4bfd-a6a9-efce7f862e66 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.108536638Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=f44287be-e5fe-45e8-aad4-6cee65728e99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.108677868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.115883989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.116703256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.148904852Z" level=info msg="Created container c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=f44287be-e5fe-45e8-aad4-6cee65728e99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.149659276Z" level=info msg="Starting container: c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132" id=c231338e-4e86-444e-ad85-83ed3156a4eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.15190901Z" level=info msg="Started container" PID=1838 containerID=c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper id=c231338e-4e86-444e-ad85-83ed3156a4eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d8e55582aaded9a94c910f23feb0845d7c84c913e8550cc713cbbebad2cb273
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.273998645Z" level=info msg="Removing container: 53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac" id=16ebedd8-4ca6-4264-8d13-99a3e8e23b17 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.284533034Z" level=info msg="Removed container 53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=16ebedd8-4ca6-4264-8d13-99a3e8e23b17 name=/runtime.v1.RuntimeService/RemoveContainer
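The CRI-O log shows the dashboard-metrics-scraper container being created, started, and removed in a loop, which matches the CrashLoopBackOff seen in the kubelet log below; the cluster-level containers are untouched. To inspect the looping container from inside the node (a sketch; the container ID changes on every restart):

    out/minikube-linux-amd64 -p embed-certs-422591 ssh
    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs --tail=20 <container-id>   # <container-id> from the ps output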
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c331b6bc0e55d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   0d8e55582aade       dashboard-metrics-scraper-867fb5f87b-cltr8   kubernetes-dashboard
	33848c1c8d16f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         2                   678d8c12d440e       storage-provisioner                          kube-system
	d6749648effaf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   b62e73724f676       kubernetes-dashboard-b84665fb8-h42vt         kubernetes-dashboard
	9a99e6688d733       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     1                   0c9bf13239f0e       coredns-7d764666f9-dmhdv                     kube-system
	8b14082c26eaf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   ae09eac9878d0       busybox                                      default
	05f6fefa1a4cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         1                   678d8c12d440e       storage-provisioner                          kube-system
	4967cdbb89308       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 1                   7da110f48bd98       kindnet-9zxtp                                kube-system
	0317791c507ee       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           55 seconds ago      Running             kube-proxy                  1                   ef24192e0817a       kube-proxy-j2dkd                             kube-system
	bf17ccdbddfbe       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           58 seconds ago      Running             kube-apiserver              1                   565d6159f70ae       kube-apiserver-embed-certs-422591            kube-system
	06e9a5f4cf462       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           58 seconds ago      Running             kube-scheduler              1                   f44a1c2383659       kube-scheduler-embed-certs-422591            kube-system
	ee96a2302387d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        1                   7e044ef83d53e       etcd-embed-certs-422591                      kube-system
	eff3bdbb5d917       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           58 seconds ago      Running             kube-controller-manager     1                   956cc8fbc6072       kube-controller-manager-embed-certs-422591   kube-system
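Reading the table: dashboard-metrics-scraper sits in Exited state on attempt 3 while every control-plane container (apiserver, scheduler, etcd, controller-manager) is Running, so the Pause failure is not a broken cluster. To confirm the restart count from outside the node (pod name from this run; index 0 assumes the single-container pod shown here):

    kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-867fb5f87b-cltr8 \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'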
	
	
	==> describe nodes <==
	Name:               embed-certs-422591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-422591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-422591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422591
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422591
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                8e5f32a2-4590-4e27-9bc4-b0131e49535f
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-dmhdv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-422591                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-9zxtp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-422591             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-422591    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-j2dkd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-422591             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-cltr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-h42vt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node embed-certs-422591 event: Registered Node embed-certs-422591 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node embed-certs-422591 event: Registered Node embed-certs-422591 in Controller
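The node itself is healthy: Ready since 06:56:23, no taints, and requests well under allocatable (850m CPU of 8 cores, 220Mi of ~31Gi memory). To re-check just the allocation summary without the full dump (plain kubectl piped through sed):

    kubectl describe node embed-certs-422591 | sed -n '/Allocated resources:/,/Events:/p'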
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:54 up 40 min,  0 user,  load average: 4.68, 3.30, 2.02
	Linux embed-certs-422591 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:13 embed-certs-422591 kubelet[733]: E1228 06:57:13.186480     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-422591" containerName="kube-apiserver"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.105711     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.105747     733 scope.go:122] "RemoveContainer" containerID="77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.204885     733 scope.go:122] "RemoveContainer" containerID="77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.205110     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.205146     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.205349     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: E1228 06:57:28.379232     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: I1228 06:57:28.379290     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: E1228 06:57:28.379547     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:30 embed-certs-422591 kubelet[733]: I1228 06:57:30.236055     733 scope.go:122] "RemoveContainer" containerID="05f6fefa1a4cc0a1d42feee46f8d516af845351bd5e66cb6e37345b07efb156b"
	Dec 28 06:57:37 embed-certs-422591 kubelet[733]: E1228 06:57:37.838218     733 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dmhdv" containerName="coredns"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.105487     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.105532     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.271949     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.272220     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.272258     733 scope.go:122] "RemoveContainer" containerID="c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.272473     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: E1228 06:57:48.378829     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: I1228 06:57:48.378877     733 scope.go:122] "RemoveContainer" containerID="c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: E1228 06:57:48.379147     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: kubelet.service: Consumed 1.892s CPU time.
	

-- /stdout --
** stderr ** 
	E1228 06:57:54.207244  280319 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.271498  280319 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.335232  280319 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.405408  280319 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.467931  280319 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.536779  280319 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.607192  280319 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.669216  280319 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:54.743198  280319 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:54Z" level=error msg="open /run/runc: no such file or directory"

** /stderr **
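
All nine per-component listings in the stderr block above fail identically: the log collector shells out to `sudo runc --root /run/runc list -f json`, and `/run/runc` does not exist inside the CRI-O node, so every query exits with status 1 before a single container can be enumerated. A minimal Go sketch of that check, assuming a hard-coded state root (the helper name is illustrative only; minikube runs the same command over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers mirrors the failing check seen above: run `runc list`
// against an explicit state root and surface a non-zero exit together with
// its combined output. Helper name and hard-coded root are assumptions.
func listRuncContainers(root string) ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil {
		// This is the failure mode in the log: the root directory is
		// absent, so runc reports "no such file or directory".
		return nil, fmt.Errorf("runc list under %s: %w: %s", root, err, out)
	}
	return out, nil
}

func main() {
	if _, err := listRuncContainers("/run/runc"); err != nil {
		fmt.Println(err)
	}
}

Pointing the check at the default standalone-runc root comes up empty here, which suggests CRI-O on this image keeps its runc state under a different root; that mismatch is consistent with the pause and addon-disable failures recorded in this report.
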
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-422591 -n embed-certs-422591: exit status 2 (354.921886ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-422591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
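
The kubelet log above also shows dashboard-metrics-scraper cycling through CrashLoopBackOff, with the restart delay doubling from 20s to 40s between attempts. A back-of-envelope sketch of that schedule (the 10s base and 5m cap mirror upstream kubelet defaults; treat the constants as assumptions, not kubelet's source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed schedule: 10s initial delay, doubled after each failed
	// restart, capped at 5 minutes.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: CrashLoopBackOff %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Under that schedule, the 20s and 40s entries in the log correspond to the second and third restarts of the container.
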
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-422591
helpers_test.go:244: (dbg) docker inspect embed-certs-422591:

-- stdout --
	[
	    {
	        "Id": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	        "Created": "2025-12-28T06:55:48.729729272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260543,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T06:56:49.759337515Z",
	            "FinishedAt": "2025-12-28T06:56:48.26540459Z"
	        },
	        "Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
	        "ResolvConfPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hostname",
	        "HostsPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/hosts",
	        "LogPath": "/var/lib/docker/containers/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af/ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af-json.log",
	        "Name": "/embed-certs-422591",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422591:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-422591",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ceaa376452cd4a7bcca9492d34bd5d364cb5ab63050b743bf10cdfb3e5e115af",
	                "LowerDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943-init/diff:/var/lib/docker/overlay2/69e554713d6cc3cb33e7ea5f93430536a8ca0db38320574d3719c26f00b2f62c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa50c03544bef69bef974a2d5c791199be0e99174b206655dc5df29bb78e3943/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422591",
	                "Source": "/var/lib/docker/volumes/embed-certs-422591/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422591",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422591",
	                "name.minikube.sigs.k8s.io": "embed-certs-422591",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1856baf06335a2dc4443c166a0036a879b1f5f07f6464323cb1fcad3c838a11c",
	            "SandboxKey": "/var/run/docker/netns/1856baf06335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-422591": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4435fbd1d5af1aad2bc3ae8af8af55a14dd14ed989f116744286ee3cfc1b4c5c",
	                    "EndpointID": "34aeca0b180c67c20d06af58860d00b10da2999d2c9278ccf5ee85bc2f1016d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9e:58:91:32:74:7a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422591",
	                        "ceaa376452cd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
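
Note how HostConfig.PortBindings in the inspect output requests ephemeral host ports (every HostPort is empty), while the concrete assignments appear only under NetworkSettings.Ports, e.g. 22/tcp bound to 127.0.0.1:33083. A small sketch of reading an assigned port back out through the docker CLI's inspect templates (container name taken from this report; the helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves which ephemeral host port Docker assigned to a given
// container port, using the same NetworkSettings.Ports structure shown in
// the inspect output above.
func hostPort(container, portProto string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, portProto)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("embed-certs-422591", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "33083" for the container above
}

That 22/tcp mapping is the address the test harness dials to SSH into the node.
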
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591: exit status 2 (354.86429ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-422591 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-422591 logs -n 25: (1.121468158s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p old-k8s-version-694122                                                                                                                                                                                                                     │ old-k8s-version-694122       │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:56 UTC │
	│ start   │ -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:56 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ no-preload-950460 image list --format=json                                                                                                                                                                                                    │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p no-preload-950460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-479871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ stop    │ -p newest-cni-479871 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p no-preload-950460                                                                                                                                                                                                                          │ no-preload-950460            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-610916                  │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ image   │ newest-cni-479871 image list --format=json                                                                                                                                                                                                    │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p newest-cni-479871 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ delete  │ -p newest-cni-479871                                                                                                                                                                                                                          │ newest-cni-479871            │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ start   │ -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-610916               │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ default-k8s-diff-port-500581 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p default-k8s-diff-port-500581 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-500581 │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	│ image   │ embed-certs-422591 image list --format=json                                                                                                                                                                                                   │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │ 28 Dec 25 06:57 UTC │
	│ pause   │ -p embed-certs-422591 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422591           │ jenkins │ v1.37.0 │ 28 Dec 25 06:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:57:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:57:48.092140  278228 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:48.092378  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092386  278228 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:48.092390  278228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:48.092563  278228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:57:48.093025  278228 out.go:368] Setting JSON to false
	I1228 06:57:48.094218  278228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2420,"bootTime":1766902648,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:57:48.094271  278228 start.go:143] virtualization: kvm guest
	I1228 06:57:48.096346  278228 out.go:179] * [kindnet-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:57:48.097696  278228 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:57:48.097711  278228 notify.go:221] Checking for updates...
	I1228 06:57:48.099961  278228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:57:48.101206  278228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:57:48.102372  278228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:57:48.103893  278228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:57:48.105015  278228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:57:48.106663  278228 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106765  278228 config.go:182] Loaded profile config "default-k8s-diff-port-500581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106861  278228 config.go:182] Loaded profile config "embed-certs-422591": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:57:48.106961  278228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:57:48.131005  278228 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:57:48.131171  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.189150  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.178186648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.189241  278228 docker.go:319] overlay module found
	I1228 06:57:48.191304  278228 out.go:179] * Using the docker driver based on user configuration
	I1228 06:57:48.193249  278228 start.go:309] selected driver: docker
	I1228 06:57:48.193265  278228 start.go:928] validating driver "docker" against <nil>
	I1228 06:57:48.193284  278228 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:57:48.193804  278228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:48.254205  278228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-28 06:57:48.244834861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:57:48.254370  278228 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:57:48.254573  278228 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1228 06:57:48.255833  278228 out.go:179] * Using Docker driver with root privileges
	I1228 06:57:48.256951  278228 cni.go:84] Creating CNI manager for "kindnet"
	I1228 06:57:48.256968  278228 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1228 06:57:48.257020  278228 start.go:353] cluster config:
	{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:57:48.258226  278228 out.go:179] * Starting "kindnet-610916" primary control-plane node in "kindnet-610916" cluster
	I1228 06:57:48.259087  278228 cache.go:134] Beginning downloading kic base image for docker with crio
	I1228 06:57:48.260119  278228 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:57:48.261011  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.261053  278228 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1228 06:57:48.261062  278228 cache.go:65] Caching tarball of preloaded images
	I1228 06:57:48.261099  278228 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:57:48.261144  278228 preload.go:251] Found /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1228 06:57:48.261157  278228 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1228 06:57:48.261238  278228 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json ...
	I1228 06:57:48.261266  278228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/kindnet-610916/config.json: {Name:mk0bc80a535dbef6153fe5637e5a21a1797ea2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:57:48.282470  278228 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 06:57:48.282492  278228 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 06:57:48.282511  278228 cache.go:243] Successfully downloaded all kic artifacts
	I1228 06:57:48.282540  278228 start.go:360] acquireMachinesLock for kindnet-610916: {Name:mk606eee79fd57ff798c0475285cd3fc5d0868a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 06:57:48.282648  278228 start.go:364] duration metric: took 90.358µs to acquireMachinesLock for "kindnet-610916"
	I1228 06:57:48.282689  278228 start.go:93] Provisioning new machine with config: &{Name:kindnet-610916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kindnet-610916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1228 06:57:48.282765  278228 start.go:125] createHost starting for "" (driver="docker")
	W1228 06:57:44.957551  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:47.457801  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	I1228 06:57:48.285477  278228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 06:57:48.285727  278228 start.go:159] libmachine.API.Create for "kindnet-610916" (driver="docker")
	I1228 06:57:48.285762  278228 client.go:173] LocalClient.Create starting
	I1228 06:57:48.285841  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/ca.pem
	I1228 06:57:48.285884  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.285904  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.285963  278228 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-5550/.minikube/certs/cert.pem
	I1228 06:57:48.285990  278228 main.go:144] libmachine: Decoding PEM data...
	I1228 06:57:48.286003  278228 main.go:144] libmachine: Parsing certificate...
	I1228 06:57:48.286341  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 06:57:48.302864  278228 cli_runner.go:211] docker network inspect kindnet-610916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 06:57:48.302927  278228 network_create.go:284] running [docker network inspect kindnet-610916] to gather additional debugging logs...
	I1228 06:57:48.302943  278228 cli_runner.go:164] Run: docker network inspect kindnet-610916
	W1228 06:57:48.319307  278228 cli_runner.go:211] docker network inspect kindnet-610916 returned with exit code 1
	I1228 06:57:48.319353  278228 network_create.go:287] error running [docker network inspect kindnet-610916]: docker network inspect kindnet-610916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-610916 not found
	I1228 06:57:48.319372  278228 network_create.go:289] output of [docker network inspect kindnet-610916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-610916 not found
	
	** /stderr **
	I1228 06:57:48.319446  278228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 06:57:48.336850  278228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
	I1228 06:57:48.337829  278228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-94477def059b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:82:84:46:ba:6c} reservation:<nil>}
	I1228 06:57:48.338770  278228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-76f4b09d664b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e7:39:af:62:68} reservation:<nil>}
	I1228 06:57:48.339438  278228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4435fbd1d5af IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:c5:3b:23:f3:bc} reservation:<nil>}
	I1228 06:57:48.340434  278228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb5ad0}
	I1228 06:57:48.340461  278228 network_create.go:124] attempt to create docker network kindnet-610916 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 06:57:48.340506  278228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-610916 kindnet-610916
	I1228 06:57:48.395723  278228 network_create.go:108] docker network kindnet-610916 192.168.85.0/24 created
	I1228 06:57:48.395757  278228 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-610916" container
	I1228 06:57:48.395824  278228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 06:57:48.417714  278228 cli_runner.go:164] Run: docker volume create kindnet-610916 --label name.minikube.sigs.k8s.io=kindnet-610916 --label created_by.minikube.sigs.k8s.io=true
	I1228 06:57:48.439257  278228 oci.go:103] Successfully created a docker volume kindnet-610916
	I1228 06:57:48.439339  278228 cli_runner.go:164] Run: docker run --rm --name kindnet-610916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --entrypoint /usr/bin/test -v kindnet-610916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 06:57:48.836799  278228 oci.go:107] Successfully prepared a docker volume kindnet-610916
	I1228 06:57:48.836882  278228 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1228 06:57:48.836910  278228 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 06:57:48.836980  278228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 06:57:52.712727  278228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-610916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.875692449s)
	I1228 06:57:52.712770  278228 kic.go:203] duration metric: took 3.875856935s to extract preloaded images to volume ...
	W1228 06:57:52.712882  278228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1228 06:57:52.712922  278228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1228 06:57:52.712974  278228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 06:57:52.777238  278228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-610916 --name kindnet-610916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-610916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-610916 --network kindnet-610916 --ip 192.168.85.2 --volume kindnet-610916:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 06:57:53.076304  278228 cli_runner.go:164] Run: docker container inspect kindnet-610916 --format={{.State.Running}}
	W1228 06:57:49.458420  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:51.957779  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	W1228 06:57:53.957965  270987 node_ready.go:57] node "auto-610916" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.149907901Z" level=info msg="Started container" PID=1784 containerID=53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper id=fa3aa926-0c30-4cd6-8a49-46a3ff7023cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d8e55582aaded9a94c910f23feb0845d7c84c913e8550cc713cbbebad2cb273
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.206188609Z" level=info msg="Removing container: 77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549" id=7bf15b82-100b-47e7-8189-fcb6d4c50f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:19 embed-certs-422591 crio[569]: time="2025-12-28T06:57:19.215731685Z" level=info msg="Removed container 77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=7bf15b82-100b-47e7-8189-fcb6d4c50f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.236497491Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4a7073f8-e3d5-471b-a79b-778baae97f60 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.237552667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5aaea0cd-c3be-44de-991e-ecfccc20fd3a name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.238622539Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6c9cf3ea-7e3b-4917-9e32-cd77bc153ffe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.238785626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.243946483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.244177667Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/edeefaf0b66864667b2cf5309848f4de1ffd7662a94ff697a5c7c1b0255b7b13/merged/etc/passwd: no such file or directory"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.244218312Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/edeefaf0b66864667b2cf5309848f4de1ffd7662a94ff697a5c7c1b0255b7b13/merged/etc/group: no such file or directory"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.245722098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.281491891Z" level=info msg="Created container 33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d: kube-system/storage-provisioner/storage-provisioner" id=6c9cf3ea-7e3b-4917-9e32-cd77bc153ffe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.28217866Z" level=info msg="Starting container: 33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d" id=f839828e-ba04-41c6-89f5-7c8a568c1e47 name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:30 embed-certs-422591 crio[569]: time="2025-12-28T06:57:30.283807362Z" level=info msg="Started container" PID=1799 containerID=33848c1c8d16f600b535f0d6444ca5211eb608b0a2a9d27f2c59d60af9883a3d description=kube-system/storage-provisioner/storage-provisioner id=f839828e-ba04-41c6-89f5-7c8a568c1e47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=678d8c12d440e697d053b48e1ab524234b9cb8247d07958dd2429a4b6e5c3ac5
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.106325743Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb27ba15-82fe-4c0c-9917-ab98849b8e4b name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.107451416Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9edec0b9-8855-4bfd-a6a9-efce7f862e66 name=/runtime.v1.ImageService/ImageStatus
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.108536638Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=f44287be-e5fe-45e8-aad4-6cee65728e99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.108677868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.115883989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.116703256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.148904852Z" level=info msg="Created container c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=f44287be-e5fe-45e8-aad4-6cee65728e99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.149659276Z" level=info msg="Starting container: c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132" id=c231338e-4e86-444e-ad85-83ed3156a4eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.15190901Z" level=info msg="Started container" PID=1838 containerID=c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper id=c231338e-4e86-444e-ad85-83ed3156a4eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d8e55582aaded9a94c910f23feb0845d7c84c913e8550cc713cbbebad2cb273
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.273998645Z" level=info msg="Removing container: 53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac" id=16ebedd8-4ca6-4264-8d13-99a3e8e23b17 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 28 06:57:44 embed-certs-422591 crio[569]: time="2025-12-28T06:57:44.284533034Z" level=info msg="Removed container 53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8/dashboard-metrics-scraper" id=16ebedd8-4ca6-4264-8d13-99a3e8e23b17 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c331b6bc0e55d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   3                   0d8e55582aade       dashboard-metrics-scraper-867fb5f87b-cltr8   kubernetes-dashboard
	33848c1c8d16f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         2                   678d8c12d440e       storage-provisioner                          kube-system
	d6749648effaf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   b62e73724f676       kubernetes-dashboard-b84665fb8-h42vt         kubernetes-dashboard
	9a99e6688d733       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     1                   0c9bf13239f0e       coredns-7d764666f9-dmhdv                     kube-system
	8b14082c26eaf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   ae09eac9878d0       busybox                                      default
	05f6fefa1a4cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         1                   678d8c12d440e       storage-provisioner                          kube-system
	4967cdbb89308       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 1                   7da110f48bd98       kindnet-9zxtp                                kube-system
	0317791c507ee       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           57 seconds ago       Running             kube-proxy                  1                   ef24192e0817a       kube-proxy-j2dkd                             kube-system
	bf17ccdbddfbe       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           About a minute ago   Running             kube-apiserver              1                   565d6159f70ae       kube-apiserver-embed-certs-422591            kube-system
	06e9a5f4cf462       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              1                   f44a1c2383659       kube-scheduler-embed-certs-422591            kube-system
	ee96a2302387d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        1                   7e044ef83d53e       etcd-embed-certs-422591                      kube-system
	eff3bdbb5d917       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           About a minute ago   Running             kube-controller-manager     1                   956cc8fbc6072       kube-controller-manager-embed-certs-422591   kube-system
	
	
	==> describe nodes <==
	Name:               embed-certs-422591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-422591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba
	                    minikube.k8s.io/name=embed-certs-422591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_28T06_56_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Dec 2025 06:56:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422591
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Dec 2025 06:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Dec 2025 06:57:29 +0000   Sun, 28 Dec 2025 06:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422591
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 493159aea3d8b8768b108b926950835d
	  System UUID:                8e5f32a2-4590-4e27-9bc4-b0131e49535f
	  Boot ID:                    e7a1d175-ccf2-4135-b9c7-3a9f70f4c4af
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-dmhdv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-422591                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-9zxtp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-422591             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-422591    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-j2dkd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-422591             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-cltr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-h42vt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node embed-certs-422591 event: Registered Node embed-certs-422591 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node embed-certs-422591 event: Registered Node embed-certs-422591 in Controller
	
	
	==> dmesg <==
	[Dec28 06:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001811] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386099] i8042: Warning: Keylock active
	[  +0.010472] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485785] block sda: the capability attribute has been deprecated.
	[  +0.082391] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024584] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.071522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 06:57:56 up 40 min,  0 user,  load average: 4.68, 3.30, 2.02
	Linux embed-certs-422591 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 06:57:13 embed-certs-422591 kubelet[733]: E1228 06:57:13.186480     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-422591" containerName="kube-apiserver"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.105711     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.105747     733 scope.go:122] "RemoveContainer" containerID="77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.204885     733 scope.go:122] "RemoveContainer" containerID="77a5e9c6df76701aa86457d72d0b8ed7235fee8620f867a0542a8026e0567549"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.205110     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: I1228 06:57:19.205146     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:19 embed-certs-422591 kubelet[733]: E1228 06:57:19.205349     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: E1228 06:57:28.379232     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: I1228 06:57:28.379290     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:28 embed-certs-422591 kubelet[733]: E1228 06:57:28.379547     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:30 embed-certs-422591 kubelet[733]: I1228 06:57:30.236055     733 scope.go:122] "RemoveContainer" containerID="05f6fefa1a4cc0a1d42feee46f8d516af845351bd5e66cb6e37345b07efb156b"
	Dec 28 06:57:37 embed-certs-422591 kubelet[733]: E1228 06:57:37.838218     733 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dmhdv" containerName="coredns"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.105487     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.105532     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.271949     733 scope.go:122] "RemoveContainer" containerID="53c5f983861efda1e9c539c2ebaae1c817f5e97e6c506a16f6369c9a2193caac"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.272220     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: I1228 06:57:44.272258     733 scope.go:122] "RemoveContainer" containerID="c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132"
	Dec 28 06:57:44 embed-certs-422591 kubelet[733]: E1228 06:57:44.272473     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: E1228 06:57:48.378829     733 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" containerName="dashboard-metrics-scraper"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: I1228 06:57:48.378877     733 scope.go:122] "RemoveContainer" containerID="c331b6bc0e55d87e2a7e723a00d12efb8fe0658e69f093be34fc3b00cfe50132"
	Dec 28 06:57:48 embed-certs-422591 kubelet[733]: E1228 06:57:48.379147     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-cltr8_kubernetes-dashboard(b545257e-e7ac-4504-a190-f74f706b2d14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-cltr8" podUID="b545257e-e7ac-4504-a190-f74f706b2d14"
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 06:57:52 embed-certs-422591 systemd[1]: kubelet.service: Consumed 1.892s CPU time.

-- /stdout --
** stderr ** 
	E1228 06:57:56.115459  281522 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.187729  281522 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.249968  281522 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.318127  281522 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.385603  281522 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.455272  281522 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.527732  281522 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.593727  281522 logs.go:279] Failed to list containers for "storage-provisioner": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"
	E1228 06:57:56.672709  281522 logs.go:279] Failed to list containers for "kubernetes-dashboard": runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:57:56Z" level=error msg="open /run/runc: no such file or directory"

                                                
                                                
** /stderr **
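The post-mortem above carries two distinct signals: the stderr block shows every container-listing probe failing with open /run/runc: no such file or directory, while the CRI-O and kubelet sections show dashboard-metrics-scraper-867fb5f87b-cltr8 stuck in CrashLoopBackOff (back-off doubling from 20s to 40s). A minimal triage sketch for the scraper side, assuming the embed-certs-422591 context is still reachable:

	$ kubectl --context embed-certs-422591 get po -A --field-selector=status.phase!=Running
	# show logs from the previous (crashed) instance of the scraper pod
	$ kubectl --context embed-certs-422591 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-867fb5f87b-cltr8 --previous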
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-422591 -n embed-certs-422591
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-422591 -n embed-certs-422591: exit status 2 (354.547031ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-422591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.93s)
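The FAIL itself reduces to the root cause visible in the stderr block: each logs.go probe shells out to sudo runc --root /run/runc list -f json, and the node answers that /run/runc does not exist. A reproduction sketch, assuming the embed-certs-422591 profile is still running; the runtime_root grep is a starting point only, since the TOML layout printed by crio config varies across crio versions:

	# reproduce the failing probe inside the node
	$ out/minikube-linux-amd64 ssh -p embed-certs-422591 -- sudo runc --root /run/runc list -f json
	# check which runtime root crio is actually configured with
	$ out/minikube-linux-amd64 ssh -p embed-certs-422591 -- sudo crio config | grep -n runtime_root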

                                                
                                    

Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 2.71
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.79
22 TestOffline 52.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 89.61
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.41
48 TestAddons/StoppedEnableDisable 12.43
49 TestCertOptions 23.92
50 TestCertExpiration 209.95
52 TestForceSystemdFlag 25.27
53 TestForceSystemdEnv 38.03
58 TestErrorSpam/setup 19.01
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 6.15
62 TestErrorSpam/unpause 5.55
63 TestErrorSpam/stop 12.18
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 35.17
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.01
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.47
75 TestFunctional/serial/CacheCmd/cache/add_local 0.88
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 28.68
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.93
86 TestFunctional/serial/LogsFileCmd 0.95
87 TestFunctional/serial/InvalidService 3.59
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 28.31
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 16.53
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 32.84
101 TestFunctional/parallel/SSHCmd 0.64
102 TestFunctional/parallel/CpCmd 1.62
103 TestFunctional/parallel/MySQL 20.73
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 1.94
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.32
114 TestFunctional/parallel/MountCmd/any-port 11.74
115 TestFunctional/parallel/MountCmd/specific-port 1.72
116 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
117 TestFunctional/parallel/ServiceCmd/DeployApp 8.13
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
121 TestFunctional/parallel/ServiceCmd/List 1.48
122 TestFunctional/parallel/ServiceCmd/JSONOutput 1.36
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
124 TestFunctional/parallel/ServiceCmd/Format 0.47
125 TestFunctional/parallel/ServiceCmd/URL 0.5
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
127 TestFunctional/parallel/ProfileCmd/profile_list 0.49
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.49
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 6.21
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
141 TestFunctional/parallel/ImageCommands/Setup 0.33
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 87.64
163 TestMultiControlPlane/serial/DeployApp 4.3
164 TestMultiControlPlane/serial/PingHostFromPods 0.98
165 TestMultiControlPlane/serial/AddWorkerNode 27.42
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 16.44
169 TestMultiControlPlane/serial/StopSecondaryNode 12.71
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.29
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 95.76
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.04
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
176 TestMultiControlPlane/serial/StopCluster 36.25
177 TestMultiControlPlane/serial/RestartCluster 55.98
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 28.4
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 36.47
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 12.04
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 24.03
211 TestKicCustomNetwork/use_default_bridge_network 19.4
212 TestKicExistingNetwork 19.92
213 TestKicCustomSubnet 22.73
214 TestKicStaticIP 20.61
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 42.06
219 TestMountStart/serial/StartWithMountFirst 4.82
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 7.6
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.04
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 60.97
231 TestMultiNode/serial/DeployApp2Nodes 3.36
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 23.81
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.37
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 6.97
239 TestMultiNode/serial/RestartKeepsNodes 69.81
240 TestMultiNode/serial/DeleteNode 6.02
241 TestMultiNode/serial/StopMultiNode 24.11
242 TestMultiNode/serial/RestartMultiNode 51.62
243 TestMultiNode/serial/ValidateNameConflict 22.83
250 TestScheduledStopUnix 97.07
253 TestInsufficientStorage 8.68
254 TestRunningBinaryUpgrade 47.35
256 TestKubernetesUpgrade 80.82
257 TestMissingContainerUpgrade 67.73
259 TestPause/serial/Start 54.43
260 TestStoppedBinaryUpgrade/Setup 0.77
261 TestStoppedBinaryUpgrade/Upgrade 308.62
262 TestPause/serial/SecondStartNoReconfiguration 7.59
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
273 TestNoKubernetes/serial/StartWithK8s 20.31
274 TestNoKubernetes/serial/StartWithStopK8s 5.73
275 TestNoKubernetes/serial/Start 7.01
279 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
281 TestNoKubernetes/serial/ProfileList 17.28
286 TestNetworkPlugins/group/false 3.44
290 TestNoKubernetes/serial/Stop 1.28
291 TestNoKubernetes/serial/StartNoArgs 6.45
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
293 TestPreload/Start-NoPreload-PullImage 61.3
295 TestStartStop/group/old-k8s-version/serial/FirstStart 51.49
296 TestPreload/Restart-With-Preload-Check-User-Image 46.03
298 TestStartStop/group/no-preload/serial/FirstStart 43.72
299 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
301 TestStartStop/group/old-k8s-version/serial/Stop 12.23
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 49.56
305 TestStartStop/group/no-preload/serial/DeployApp 9.27
307 TestStartStop/group/embed-certs/serial/FirstStart 42.18
308 TestStoppedBinaryUpgrade/MinikubeLogs 2.15
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.89
312 TestStartStop/group/no-preload/serial/Stop 13.47
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
314 TestStartStop/group/no-preload/serial/SecondStart 48.45
315 TestStartStop/group/embed-certs/serial/DeployApp 8.26
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
320 TestStartStop/group/embed-certs/serial/Stop 12.27
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
326 TestStartStop/group/embed-certs/serial/SecondStart 50.95
328 TestStartStop/group/newest-cni/serial/FirstStart 22.58
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.23
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 12.08
338 TestNetworkPlugins/group/auto/Start 39.23
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 10.97
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
348 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
349 TestNetworkPlugins/group/kindnet/Start 44.74
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
354 TestNetworkPlugins/group/auto/KubeletFlags 0.31
355 TestNetworkPlugins/group/auto/NetCatPod 8.2
356 TestNetworkPlugins/group/calico/Start 49.59
357 TestNetworkPlugins/group/custom-flannel/Start 41.38
358 TestNetworkPlugins/group/auto/DNS 0.12
359 TestNetworkPlugins/group/auto/Localhost 0.11
360 TestNetworkPlugins/group/auto/HairPin 0.1
361 TestNetworkPlugins/group/enable-default-cni/Start 61.06
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
364 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.35
367 TestNetworkPlugins/group/kindnet/DNS 0.12
368 TestNetworkPlugins/group/kindnet/Localhost 0.1
369 TestNetworkPlugins/group/kindnet/HairPin 0.1
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/custom-flannel/DNS 0.11
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.08
374 TestNetworkPlugins/group/calico/KubeletFlags 0.33
375 TestNetworkPlugins/group/calico/NetCatPod 8.23
376 TestNetworkPlugins/group/calico/DNS 0.14
377 TestNetworkPlugins/group/calico/Localhost 0.11
378 TestNetworkPlugins/group/calico/HairPin 0.12
379 TestNetworkPlugins/group/flannel/Start 44.14
380 TestNetworkPlugins/group/bridge/Start 66.07
381 TestPreload/PreloadSrc/gcs 3.87
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.58
384 TestPreload/PreloadSrc/github 4.32
385 TestPreload/PreloadSrc/gcs-cached 0.58
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
391 TestNetworkPlugins/group/flannel/NetCatPod 8.17
392 TestNetworkPlugins/group/flannel/DNS 0.11
393 TestNetworkPlugins/group/flannel/Localhost 0.08
394 TestNetworkPlugins/group/flannel/HairPin 0.09
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
396 TestNetworkPlugins/group/bridge/NetCatPod 9.19
397 TestNetworkPlugins/group/bridge/DNS 0.1
398 TestNetworkPlugins/group/bridge/Localhost 0.08
399 TestNetworkPlugins/group/bridge/HairPin 0.08

TestDownloadOnly/v1.28.0/json-events (4.69s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-239257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-239257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.694625245s)
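With -o=json, minikube emits progress as a line-delimited stream of CloudEvents-style JSON objects, which is what the json-events subtests parse. A quick way to inspect the stream by hand; download-only-demo is a placeholder profile name, jq is assumed to be installed, and the .type field follows minikube's JSON event schema as of this run:

	$ out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	    --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq -r .type

Each emitted line should map to an event type, such as a step or a download-progress notification.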
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.69s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1228 06:27:37.446313    9076 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1228 06:27:37.446393    9076 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
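The preload-exists subtest asserts nothing beyond the cached tarball being present at the path logged above, so the same check by hand is a plain directory listing (path taken verbatim from this run):

	$ ls -lh /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/
	# expect preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in the listing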
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-239257
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-239257: exit status 85 (78.831938ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-239257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-239257 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:27:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:27:32.803722    9087 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:27:32.803974    9087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:32.803985    9087 out.go:374] Setting ErrFile to fd 2...
	I1228 06:27:32.803989    9087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:32.804215    9087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	W1228 06:27:32.804340    9087 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22352-5550/.minikube/config/config.json: open /home/jenkins/minikube-integration/22352-5550/.minikube/config/config.json: no such file or directory
	I1228 06:27:32.804799    9087 out.go:368] Setting JSON to true
	I1228 06:27:32.805634    9087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":605,"bootTime":1766902648,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:27:32.805685    9087 start.go:143] virtualization: kvm guest
	I1228 06:27:32.809559    9087 out.go:99] [download-only-239257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1228 06:27:32.809665    9087 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball: no such file or directory
	I1228 06:27:32.809741    9087 notify.go:221] Checking for updates...
	I1228 06:27:32.811107    9087 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:27:32.812413    9087 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:27:32.813743    9087 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:27:32.815065    9087 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:27:32.816232    9087 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1228 06:27:32.818430    9087 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:27:32.818656    9087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:27:32.843925    9087 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:27:32.844055    9087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:33.062300    9087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-28 06:27:33.052671338 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:33.062428    9087 docker.go:319] overlay module found
	I1228 06:27:33.063908    9087 out.go:99] Using the docker driver based on user configuration
	I1228 06:27:33.063938    9087 start.go:309] selected driver: docker
	I1228 06:27:33.063945    9087 start.go:928] validating driver "docker" against <nil>
	I1228 06:27:33.064043    9087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:33.120467    9087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-28 06:27:33.111562833 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:33.120647    9087 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:27:33.121475    9087 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 06:27:33.121687    9087 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:27:33.123322    9087 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-239257 host does not exist
	  To start a cluster, run: "minikube start -p download-only-239257"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
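Exit status 85 is the expected outcome here, not a defect: the profile was created with --download-only, so its control-plane node was never started and minikube logs has no host to read from, as the captured stdout above states. A quick check, assuming the profile has not been deleted yet:

	$ out/minikube-linux-amd64 logs -p download-only-239257; echo "exit=$?"
	# prints exit=85 while the node for the profile does not exist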
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-239257
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (2.71s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-337184 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-337184 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.712377473s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.71s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1228 06:27:40.612171    9076 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1228 06:27:40.612218    9076 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)
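
For context on the check above: the preload-exists assertion passes because the versioned tarball is already on disk, so the test reduces to a file-existence probe against the path printed by preload.go:203. A minimal Go sketch of that probe, assuming it comes down to os.Stat (preloadExists is a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists mirrors the check implied by the log above: is the
	// versioned preload tarball already present in the local cache?
	// (Hypothetical helper for illustration only.)
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println("preload exists:", preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.35.0", "cri-o"))
	}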

TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-337184
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-337184: exit status 85 (71.029755ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-239257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-239257 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ delete  │ -p download-only-239257                                                                                                                                                   │ download-only-239257 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │ 28 Dec 25 06:27 UTC │
	│ start   │ -o=json --download-only -p download-only-337184 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-337184 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:27:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:27:37.951189    9454 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:27:37.951449    9454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:37.951460    9454 out.go:374] Setting ErrFile to fd 2...
	I1228 06:27:37.951466    9454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:37.951654    9454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:27:37.952167    9454 out.go:368] Setting JSON to true
	I1228 06:27:37.953042    9454 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":610,"bootTime":1766902648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:27:37.953105    9454 start.go:143] virtualization: kvm guest
	I1228 06:27:37.955041    9454 out.go:99] [download-only-337184] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:27:37.955185    9454 notify.go:221] Checking for updates...
	I1228 06:27:37.956475    9454 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:27:37.957771    9454 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:27:37.959264    9454 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:27:37.960640    9454 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:27:37.961998    9454 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1228 06:27:37.964184    9454 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:27:37.964406    9454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:27:37.988038    9454 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:27:37.988113    9454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:38.042409    9454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-28 06:27:38.033266183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:38.042537    9454 docker.go:319] overlay module found
	I1228 06:27:38.044229    9454 out.go:99] Using the docker driver based on user configuration
	I1228 06:27:38.044264    9454 start.go:309] selected driver: docker
	I1228 06:27:38.044272    9454 start.go:928] validating driver "docker" against <nil>
	I1228 06:27:38.044361    9454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:38.099777    9454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-28 06:27:38.089926317 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:27:38.099917    9454 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:27:38.100385    9454 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1228 06:27:38.100513    9454 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:27:38.102156    9454 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-337184 host does not exist
	  To start a cluster, run: "minikube start -p download-only-337184"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-337184
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-744245 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-744245" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-744245
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1228 06:27:41.721968    9076 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-452060 --alsologtostderr --binary-mirror http://127.0.0.1:37845 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-452060" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-452060
--- PASS: TestBinaryMirror (0.79s)
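
The binary.go:80 line above shows the download URL for kubectl carrying a checksum= query that points at the published .sha256 file, i.e. the binary is verified against its upstream digest rather than trusted blindly. A minimal standard-library sketch of the equivalent verification (this is not minikube's downloader; fetching both URLs directly and comparing SHA-256 digests is an illustrative assumption):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		digest := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // digest file may carry a trailing newline
		if hex.EncodeToString(digest[:]) != want {
			panic("kubectl checksum mismatch")
		}
		fmt.Println("kubectl checksum OK")
	}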

TestOffline (52.56s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-376432 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-376432 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (49.943325257s)
helpers_test.go:176: Cleaning up "offline-crio-376432" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-376432
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-376432: (2.619532594s)
--- PASS: TestOffline (52.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-614829
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-614829: exit status 85 (60.941423ms)

-- stdout --
	* Profile "addons-614829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-614829"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-614829
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-614829: exit status 85 (60.799026ms)

-- stdout --
	* Profile "addons-614829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-614829"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (89.61s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-614829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-614829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m29.607210739s)
--- PASS: TestAddons/Setup (89.61s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-614829 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-614829 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-614829 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-614829 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [332c81f3-8d79-40b0-b4ce-5e026e0ac87d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [332c81f3-8d79-40b0-b4ce-5e026e0ac87d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003037436s
addons_test.go:696: (dbg) Run:  kubectl --context addons-614829 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-614829 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-614829 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-614829
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-614829: (12.151283456s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-614829
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-614829
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-614829
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (23.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-943497 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-943497 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.844527963s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-943497 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-943497 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-943497 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-943497" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-943497
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-943497: (2.417115328s)
--- PASS: TestCertOptions (23.92s)
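
The openssl call above is how the test confirms that the extra --apiserver-ips and --apiserver-names values were baked into apiserver.crt as subject alternative names. The same inspection can be done with Go's crypto/x509; a minimal sketch, assuming the certificate has first been copied out of the node to a local file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // e.g. fetched via `minikube ssh`
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The flags above should surface here, e.g. 192.168.15.15 and
		// www.google.com alongside the default SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}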

TestCertExpiration (209.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623987 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623987 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.984026979s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623987 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.482208465s)
helpers_test.go:176: Cleaning up "cert-expiration-623987" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-623987
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-623987: (2.483299091s)
--- PASS: TestCertExpiration (209.95s)

TestForceSystemdFlag (25.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-095404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-095404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.543959197s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-095404 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-095404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-095404
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-095404: (2.453226068s)
--- PASS: TestForceSystemdFlag (25.27s)

TestForceSystemdEnv (38.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-421965 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-421965 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.199368126s)
helpers_test.go:176: Cleaning up "force-systemd-env-421965" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-421965
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-421965: (2.829020804s)
--- PASS: TestForceSystemdEnv (38.03s)

TestErrorSpam/setup (19.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-012746 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-012746 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-012746 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-012746 --driver=docker  --container-runtime=crio: (19.009666637s)
--- PASS: TestErrorSpam/setup (19.01s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (6.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause: exit status 80 (2.233068879s)

-- stdout --
	* Pausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause: exit status 80 (2.318058921s)

-- stdout --
	* Pausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause: exit status 80 (1.597310364s)

-- stdout --
	* Pausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.15s)
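
All three pause attempts above fail identically: before pausing, minikube lists running containers with the exact command shown in stderr, and because /run/runc does not exist on this CRI-O node, runc exits non-zero and the failure is surfaced as GUEST_PAUSE / exit status 80. A minimal Go sketch that reproduces the failing probe (a reconstruction of the command from the log, not minikube's source):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRunningContainers runs the same probe that the pause path uses,
	// per the stderr above. On this node it fails with
	// "open /run/runc: no such file or directory".
	func listRunningContainers() ([]byte, error) {
		cmd := exec.Command("sudo", "runc", "--root", "/run/runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w: %s", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listRunningContainers(); err != nil {
			fmt.Println(err)
		}
	}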

TestErrorSpam/unpause (5.55s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause: exit status 80 (2.005058385s)

-- stdout --
	* Unpausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause: exit status 80 (1.615428791s)

-- stdout --
	* Unpausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause: exit status 80 (1.927880402s)

-- stdout --
	* Unpausing node nospam-012746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc --root /run/runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T06:31:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.55s)

TestErrorSpam/stop (12.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 stop: (11.980401937s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012746 --log_dir /tmp/nospam-012746 stop
--- PASS: TestErrorSpam/stop (12.18s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22352-5550/.minikube/files/etc/test/nested/copy/9076/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-932789 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (35.169136447s)
--- PASS: TestFunctional/serial/StartWithProxy (35.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
I1228 06:32:02.831935    9076 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-932789 --alsologtostderr -v=8: (6.006146407s)
functional_test.go:678: soft start took 6.006859444s for "functional-932789" cluster.
I1228 06:32:08.838476    9076 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-932789 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-932789 /tmp/TestFunctionalserialCacheCmdcacheadd_local3983573623/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache add minikube-local-cache-test:functional-932789
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache delete minikube-local-cache-test:functional-932789
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-932789
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.88s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.130915ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
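
The cache_reload sequence above is: delete the cached image inside the node, confirm `crictl inspecti` now fails (the expected exit status 1), run `minikube cache reload` to push cached images back into the runtime, then inspect again and expect success. A minimal Go sketch driving the same CLI sequence (binary path, profile, and image name are taken verbatim from the log; the run helper is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command, echoes its combined output, and returns
	// its error so callers can assert on exit status.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		const bin, profile, img = "out/minikube-linux-amd64", "functional-932789", "registry.k8s.io/pause:latest"

		_ = run(bin, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)
		if run(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img) == nil {
			fmt.Println("unexpected: inspecti succeeded right after rmi")
		}
		_ = run(bin, "-p", profile, "cache", "reload")
		if err := run(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}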

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 kubectl -- --context functional-932789 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-932789 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (28.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-932789 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.676036705s)
functional_test.go:776: restart took 28.676159511s for "functional-932789" cluster.
I1228 06:32:43.198944    9076 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (28.68s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-932789 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.95s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 logs --file /tmp/TestFunctionalserialLogsFileCmd4236425640/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)

TestFunctional/serial/InvalidService (3.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-932789 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-932789
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-932789: exit status 115 (343.15708ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31061 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-932789 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.59s)
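The assertion hinges on the child process's exit code: 115 here signals SVC_UNREACHABLE. A sketch of recovering an exit code through os/exec; the command line is the one shown in the log.

-- sketch (Go) --
// Distinguish "ran but failed" (with a code) from "could not run".
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-932789")
    err := cmd.Run()
    var exitErr *exec.ExitError
    switch {
    case err == nil:
        fmt.Println("service reachable (exit 0)")
    case errors.As(err, &exitErr):
        // 115 is what the log above shows for an unreachable service.
        fmt.Println("exit status:", exitErr.ExitCode())
    default:
        fmt.Println("could not start command:", err)
    }
}
-- /sketch --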

TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 config get cpus: exit status 14 (87.862646ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 config get cpus: exit status 14 (63.91597ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (28.31s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-932789 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-932789 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 38844: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.31s)

TestFunctional/parallel/DryRun (0.43s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-932789 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.325183ms)

-- stdout --
	* [functional-932789] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1228 06:32:49.408477   37924 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:32:49.408606   37924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:32:49.408616   37924 out.go:374] Setting ErrFile to fd 2...
	I1228 06:32:49.408622   37924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:32:49.408836   37924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:32:49.409362   37924 out.go:368] Setting JSON to false
	I1228 06:32:49.410380   37924 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":921,"bootTime":1766902648,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:32:49.410461   37924 start.go:143] virtualization: kvm guest
	I1228 06:32:49.413146   37924 out.go:179] * [functional-932789] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:32:49.414387   37924 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:32:49.414386   37924 notify.go:221] Checking for updates...
	I1228 06:32:49.416686   37924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:32:49.417900   37924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:32:49.418997   37924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:32:49.420179   37924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:32:49.421222   37924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:32:49.422645   37924 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:32:49.423314   37924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:32:49.454069   37924 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:32:49.454305   37924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:32:49.512672   37924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-28 06:32:49.501766209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:32:49.512815   37924 docker.go:319] overlay module found
	I1228 06:32:49.514493   37924 out.go:179] * Using the docker driver based on existing profile
	I1228 06:32:49.515578   37924 start.go:309] selected driver: docker
	I1228 06:32:49.515596   37924 start.go:928] validating driver "docker" against &{Name:functional-932789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-932789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:32:49.515720   37924 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:32:49.517362   37924 out.go:203] 
	W1228 06:32:49.518665   37924 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1228 06:32:49.519804   37924 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
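The dry run fails fast because the requested 250 MiB sits below minikube's usable floor of 1800 MB (the RSRC_INSUFFICIENT_REQ_MEMORY message above). A toy sketch of that validation; the threshold comes from the error text, while the function name and conversion are ours for illustration.

-- sketch (Go) --
// Reject memory requests below the usable minimum before starting.
package main

import "fmt"

const minUsableMB = 1800 // floor from RSRC_INSUFFICIENT_REQ_MEMORY above

// requestedMiBToMB converts a binary MiB request to decimal MB so the
// two units in the error message can be compared directly.
func requestedMiBToMB(mib int) float64 {
    return float64(mib) * 1024 * 1024 / 1e6
}

func main() {
    req := 250 // MiB, as the --memory 250MB flag is reported in the log
    if mb := requestedMiBToMB(req); mb < minUsableMB {
        fmt.Printf("requested %d MiB (%.0f MB) < minimum %d MB: refuse to start\n",
            req, mb, minUsableMB)
    }
}
-- /sketch --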

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-932789 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-932789 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.165764ms)

-- stdout --
	* [functional-932789] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1228 06:32:49.233902   37796 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:32:49.234169   37796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:32:49.234179   37796 out.go:374] Setting ErrFile to fd 2...
	I1228 06:32:49.234183   37796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:32:49.234483   37796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:32:49.234876   37796 out.go:368] Setting JSON to false
	I1228 06:32:49.236170   37796 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":921,"bootTime":1766902648,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:32:49.236246   37796 start.go:143] virtualization: kvm guest
	I1228 06:32:49.238118   37796 out.go:179] * [functional-932789] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1228 06:32:49.239347   37796 notify.go:221] Checking for updates...
	I1228 06:32:49.239368   37796 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:32:49.240686   37796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:32:49.241787   37796 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:32:49.243042   37796 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:32:49.244134   37796 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:32:49.245324   37796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:32:49.246978   37796 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:32:49.247498   37796 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:32:49.274875   37796 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:32:49.274973   37796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:32:49.331409   37796 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-28 06:32:49.321556463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:32:49.331509   37796 docker.go:319] overlay module found
	I1228 06:32:49.333220   37796 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1228 06:32:49.334694   37796 start.go:309] selected driver: docker
	I1228 06:32:49.334725   37796 start.go:928] validating driver "docker" against &{Name:functional-932789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-932789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:32:49.334815   37796 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:32:49.336555   37796 out.go:203] 
	W1228 06:32:49.337852   37796 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1228 06:32:49.339006   37796 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
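This test repeats the DryRun failure under a French locale; the output above differs only in wording. A toy sketch of locale-keyed message lookup, assuming LC_ALL/LANG drives the choice; minikube's real translation tables are not reproduced here.

-- sketch (Go) --
// Pick a localized message from the process locale, falling back to English.
package main

import (
    "fmt"
    "os"
    "strings"
)

var messages = map[string]string{
    "en": "Using the docker driver based on existing profile",
    "fr": "Utilisation du pilote docker basé sur le profil existant",
}

// locale returns the two-letter language code from LC_ALL or LANG.
func locale() string {
    for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LANG")} {
        if len(v) >= 2 {
            return strings.ToLower(v[:2])
        }
    }
    return "en"
}

func main() {
    msg, ok := messages[locale()]
    if !ok {
        msg = messages["en"] // unknown locales fall back to English
    }
    fmt.Println("*", msg)
}
-- /sketch --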

TestFunctional/parallel/StatusCmd (1.09s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

TestFunctional/parallel/ServiceCmdConnect (16.53s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-932789 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-932789 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-c8h5z" [e189ac8d-bb28-44da-9c74-3f6f7ad01cd0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-c8h5z" [e189ac8d-bb28-44da-9c74-3f6f7ad01cd0] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.003603572s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31805
functional_test.go:1685: http://192.168.49.2:31805: success! body:
Request served by hello-node-connect-5d95464fd4-c8h5z

HTTP/1.1 GET /

Host: 192.168.49.2:31805
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.53s)
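The test exposes the deployment as a NodePort, resolves the URL, and fetches it until the echo server answers with the request dump shown above. A sketch of polling such an endpoint; the URL is this run's NodePort, while fetchWithRetry, the interval, and the deadline are invented for illustration.

-- sketch (Go) --
// Poll an HTTP endpoint until it returns 200, or give up at a deadline.
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "time"
)

func fetchWithRetry(url string, deadline time.Duration) (string, error) {
    stop := time.Now().Add(deadline)
    for {
        resp, err := http.Get(url)
        if err == nil {
            body, readErr := io.ReadAll(resp.Body)
            resp.Body.Close()
            if readErr == nil && resp.StatusCode == http.StatusOK {
                return string(body), nil
            }
            err = fmt.Errorf("status %d: %v", resp.StatusCode, readErr)
        }
        if time.Now().After(stop) {
            return "", fmt.Errorf("gave up on %s: %w", url, err)
        }
        time.Sleep(2 * time.Second) // pods may still be starting
    }
}

func main() {
    body, err := fetchWithRetry("http://192.168.49.2:31805/", 30*time.Second)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("success! body:", body)
}
-- /sketch --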

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (32.84s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [dee2e91c-9d86-40ff-aa81-3d86ee3685b6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003890713s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-932789 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-932789 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-932789 get pvc myclaim -o=json
I1228 06:32:54.958635    9076 retry.go:84] will retry after 1.9s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b87b1c35-ff0e-4ab5-8371-d96a68031ab5 ResourceVersion:545 Generation:0 CreationTimestamp:2025-12-28 06:32:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00196c120 VolumeMode:0xc00196c130 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-932789 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-932789 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-932789 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-932789 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1ecdc4d6-5aed-4802-a7bf-c00794945525] Pending
helpers_test.go:353: "sp-pod" [1ecdc4d6-5aed-4802-a7bf-c00794945525] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [1ecdc4d6-5aed-4802-a7bf-c00794945525] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004489951s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-932789 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-932789 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-932789 apply -f testdata/storage-provisioner/pod.yaml
I1228 06:33:12.477204    9076 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e7ef81ab-4c93-4b59-93b2-6fc7b29b2d22] Pending
helpers_test.go:353: "sp-pod" [e7ef81ab-4c93-4b59-93b2-6fc7b29b2d22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e7ef81ab-4c93-4b59-93b2-6fc7b29b2d22] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003709601s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-932789 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.84s)
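The claim starts out Pending and the harness re-reads it until the phase is Bound, then proves persistence by touching a file, recreating the pod, and listing the mount again. A sketch of the phase-polling half via kubectl's jsonpath output; the claim name and context come from the log, while pvcPhase and the intervals are invented.

-- sketch (Go) --
// Re-read a PVC until .status.phase reports Bound.
package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
    "time"
)

func pvcPhase(ctx, name string) (string, error) {
    out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
        "-o", "jsonpath={.status.phase}").Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    for i := 0; i < 20; i++ {
        phase, err := pvcPhase("functional-932789", "myclaim")
        if err != nil {
            log.Fatal(err)
        }
        if phase == "Bound" {
            fmt.Println("pvc bound")
            return
        }
        fmt.Printf("phase = %q, want %q; will retry\n", phase, "Bound")
        time.Sleep(2 * time.Second)
    }
    log.Fatal("pvc never bound")
}
-- /sketch --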

TestFunctional/parallel/SSHCmd (0.64s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.62s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh -n functional-932789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cp functional-932789:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1993565576/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh -n functional-932789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh -n functional-932789 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/MySQL (20.73s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-932789 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-p4s5k" [41befcbb-948a-4b56-8000-bf563b7c852c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-p4s5k" [41befcbb-948a-4b56-8000-bf563b7c852c] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.002719515s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;": exit status 1 (89.285711ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1228 06:33:25.564543    9076 retry.go:84] will retry after 700ms: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;": exit status 1 (109.17112ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;": exit status 1 (84.416286ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-932789 exec mysql-7d7b65bc95-p4s5k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.73s)
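retry.go keeps re-running the probe while mysqld finishes starting: authentication is refused first, then the socket is briefly absent, then the query succeeds. A generic sketch of that retry-with-backoff shape; the retry helper and its doubling policy are invented, and only the probed command (with this run's pod name) comes from the log.

-- sketch (Go) --
// Retry a flaky command with exponential backoff until it succeeds.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

// retry runs fn up to attempts times, doubling the delay between tries.
func retry(attempts int, delay time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        fmt.Printf("will retry after %s: %v\n", delay, err)
        time.Sleep(delay)
        delay *= 2
    }
    return err
}

func main() {
    err := retry(5, 700*time.Millisecond, func() error {
        return exec.Command("kubectl", "--context", "functional-932789",
            "exec", "mysql-7d7b65bc95-p4s5k", "--",
            "mysql", "-ppassword", "-e", "show databases;").Run()
    })
    fmt.Println("final result:", err)
}
-- /sketch --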

TestFunctional/parallel/FileSync (0.32s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/9076/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /etc/test/nested/copy/9076/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.94s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/9076.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /etc/ssl/certs/9076.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/9076.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /usr/share/ca-certificates/9076.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/90762.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /etc/ssl/certs/90762.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/90762.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /usr/share/ca-certificates/90762.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)
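CertSync expects identical certificate content at each of the paths checked above (the .0 entries look like OpenSSL-style subject-hash names). A sketch that reads each path over minikube ssh and compares digests; the paths are from the log, and the SHA-256 comparison is just a convenient equality check, not what the test itself runs.

-- sketch (Go) --
// Read the same cert at several node paths and verify they match.
package main

import (
    "crypto/sha256"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    paths := []string{
        "/etc/ssl/certs/9076.pem",
        "/usr/share/ca-certificates/9076.pem",
        "/etc/ssl/certs/51391683.0",
    }
    var want [32]byte
    for i, p := range paths {
        out, err := exec.Command("minikube", "-p", "functional-932789",
            "ssh", "sudo cat "+p).Output()
        if err != nil {
            log.Fatalf("%s: %v", p, err)
        }
        sum := sha256.Sum256(out)
        if i == 0 {
            want = sum // first path sets the expected digest
        } else if sum != want {
            log.Fatalf("%s differs from %s", p, paths[0])
        }
        fmt.Printf("%s sha256=%x...\n", p, sum[:8])
    }
}
-- /sketch --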

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-932789 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "sudo systemctl is-active docker": exit status 1 (291.504194ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "sudo systemctl is-active containerd": exit status 1 (289.540757ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
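systemctl is-active reports state through its exit code (0 for active; here 3 for inactive, per the ssh stderr above), so both passing checks still exit non-zero, and minikube ssh re-reports that as its own exit 1. A sketch of decoding the systemd convention when systemctl runs directly on a systemd host; isActive is an invented helper and the service names are from the log.

-- sketch (Go) --
// Map systemctl is-active exit codes to a boolean.
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func isActive(service string) (bool, error) {
    err := exec.Command("systemctl", "is-active", service).Run()
    if err == nil {
        return true, nil // exit 0: active
    }
    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) && exitErr.ExitCode() == 3 {
        return false, nil // cleanly inactive, not a failure of the check
    }
    return false, err
}

func main() {
    for _, svc := range []string{"docker", "containerd"} {
        active, err := isActive(svc)
        fmt.Printf("%s: active=%v err=%v\n", svc, active, err)
    }
}
-- /sketch --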

TestFunctional/parallel/License (0.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
2025/12/28 06:33:17 [DEBUG] GET http://127.0.0.1:44061/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/MountCmd/any-port (11.74s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdany-port3922315670/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766903568734430303" to /tmp/TestFunctionalparallelMountCmdany-port3922315670/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766903568734430303" to /tmp/TestFunctionalparallelMountCmdany-port3922315670/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766903568734430303" to /tmp/TestFunctionalparallelMountCmdany-port3922315670/001/test-1766903568734430303
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.998623ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:32:49.055810    9076 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 28 06:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 28 06:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 28 06:32 test-1766903568734430303
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh cat /mount-9p/test-1766903568734430303
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-932789 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [5a483aed-e2b8-4258-8b56-67ca00c1cd79] Pending
helpers_test.go:353: "busybox-mount" [5a483aed-e2b8-4258-8b56-67ca00c1cd79] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [5a483aed-e2b8-4258-8b56-67ca00c1cd79] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [5a483aed-e2b8-4258-8b56-67ca00c1cd79] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.002999769s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-932789 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdany-port3922315670/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.74s)
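The mount test first seeds the host directory with three probe files sharing one timestamp payload, then reads them back through the 9p mount and from the busybox pod. A sketch of that fixture step; the file names mirror the log while the directory choice is illustrative.

-- sketch (Go) --
// Seed a host directory with timestamped probe files for a mount test.
package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
    "time"
)

func main() {
    dir := os.TempDir() // the harness uses a per-test temp directory
    stamp := fmt.Sprintf("test-%d", time.Now().UnixNano())
    names := []string{"created-by-test", "created-by-test-removed-by-pod", stamp}
    for _, name := range names {
        p := filepath.Join(dir, name)
        if err := os.WriteFile(p, []byte(stamp), 0o644); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("wrote %q to %s\n", stamp, p)
    }
}
-- /sketch --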

TestFunctional/parallel/MountCmd/specific-port (1.72s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdspecific-port423722126/001:/mount-9p --alsologtostderr -v=1 --port 39527]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.197224ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdspecific-port423722126/001:/mount-9p --alsologtostderr -v=1 --port 39527] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "sudo umount -f /mount-9p": exit status 1 (260.58846ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-932789 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdspecific-port423722126/001:/mount-9p --alsologtostderr -v=1 --port 39527] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T" /mount1: exit status 1 (330.712273ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-932789 --kill=true
I1228 06:33:03.632323    9076 detect.go:223] nested VM detected
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-932789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2194601350/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-932789 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-932789 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-z95jv" [756550d6-3efd-4f46-a612-ccef06c89299] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-z95jv" [756550d6-3efd-4f46-a612-ccef06c89299] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005177891s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)
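The deploy-and-wait sequence above is plain kubectl; a sketch of the manual equivalent (image and names taken from the log):

# Create the deployment and expose it on a NodePort, exactly as the test does.
kubectl --context functional-932789 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
kubectl --context functional-932789 expose deployment hello-node --type=NodePort --port=8080
# The test then polls pods matching app=hello-node until Ready; by hand:
kubectl --context functional-932789 get pods -l app=hello-node --watch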

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service list
functional_test.go:1474: (dbg) Done: out/minikube-linux-amd64 -p functional-932789 service list: (1.481346528s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service list -o json
functional_test.go:1504: (dbg) Done: out/minikube-linux-amd64 -p functional-932789 service list -o json: (1.358873582s)
functional_test.go:1509: Took "1.358985464s" to run "out/minikube-linux-amd64 -p functional-932789 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30695
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30695
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
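Both lookups above resolve the NodePort endpoint published for hello-node; a sketch of using the resolved endpoint (curl is not part of the test, and the IP/port are whatever this particular run was assigned):

minikube -p functional-932789 service hello-node --url
# -> http://192.168.49.2:30695 in this run
curl http://192.168.49.2:30695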

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "416.832942ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "74.779686ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "434.493125ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "139.07554ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)
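Worth noting from the timings above: the --light variants return in a fraction of the time of the full listings, consistent with --light skipping the per-profile cluster-status probe (per minikube's flag help). The two calls side by side:

# Full listing: validates each profile's cluster status (~434ms here).
minikube profile list -o json
# Light listing: same profiles, no status probe (~139ms here).
minikube profile list -o json --light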

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 44128: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-932789 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6b40116f-4ad8-4760-ac20-011a6799d727] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6b40116f-4ad8-4760-ac20-011a6799d727] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 6.004429294s
I1228 06:33:24.770927    9076 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.21s)
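The manifest behind this test, testdata/testsvc.yaml, is not shown in the log. Reconstructed from what the log does show (a pod labelled run=nginx-svc and a LoadBalancer service named nginx-svc in the default namespace), a stand-in would look roughly like the following; the image and port are assumptions:

kubectl --context functional-932789 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: public.ecr.aws/nginx/nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF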

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-932789 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-932789
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-932789 image ls --format short --alsologtostderr:
I1228 06:33:24.501195   46442 out.go:360] Setting OutFile to fd 1 ...
I1228 06:33:24.501292   46442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.501299   46442 out.go:374] Setting ErrFile to fd 2...
I1228 06:33:24.501304   46442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.501522   46442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
I1228 06:33:24.502095   46442 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.502231   46442 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.502764   46442 cli_runner.go:164] Run: docker container inspect functional-932789 --format={{.State.Status}}
I1228 06:33:24.523270   46442 ssh_runner.go:195] Run: systemctl --version
I1228 06:33:24.523308   46442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932789
I1228 06:33:24.542062   46442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/functional-932789/id_rsa Username:docker}
I1228 06:33:24.632264   46442 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
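This test and the three that follow exercise the same listing in four output formats; the manual equivalents are one flag apart:

minikube -p functional-932789 image ls --format short   # bare image references
minikube -p functional-932789 image ls --format table   # the box-drawn table below
minikube -p functional-932789 image ls --format json    # one machine-readable array
minikube -p functional-932789 image ls --format yaml    # same data as a YAML list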

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-932789 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-932789                     │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test               │ functional-932789                     │ c8188799c9640 │ 3.33kB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-932789 image ls --format table --alsologtostderr:
I1228 06:33:24.730515   46557 out.go:360] Setting OutFile to fd 1 ...
I1228 06:33:24.730611   46557 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.730622   46557 out.go:374] Setting ErrFile to fd 2...
I1228 06:33:24.730628   46557 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.730871   46557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
I1228 06:33:24.731526   46557 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.731644   46557 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.732244   46557 cli_runner.go:164] Run: docker container inspect functional-932789 --format={{.State.Status}}
I1228 06:33:24.752884   46557 ssh_runner.go:195] Run: systemctl --version
I1228 06:33:24.752939   46557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932789
I1228 06:33:24.774076   46557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/functional-932789/id_rsa Username:docker}
I1228 06:33:24.868539   46557 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-932789 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4944818"},{"id":"04da2b0513cd78d
8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/k
indest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256
:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830c
cac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21","public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.1
0.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c8188799c964050a8947d4e5b9688e3c168c625f9bdca001aef575d669389c49","repoDigests":["localhost/minikube-local-cache-test@sha256:98fbff52ed87de1088e0407c0fa6783f6d01c9636f0494e26f13da53358eb51f"],"repoTags":["localhost/minikube-local-cache-test:functional-932789"],"s
ize":"3330"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-932789 image ls --format json --alsologtostderr:
I1228 06:33:24.728805   46556 out.go:360] Setting OutFile to fd 1 ...
I1228 06:33:24.729123   46556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.729134   46556 out.go:374] Setting ErrFile to fd 2...
I1228 06:33:24.729140   46556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.729328   46556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
I1228 06:33:24.729841   46556 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.729938   46556 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.730423   46556 cli_runner.go:164] Run: docker container inspect functional-932789 --format={{.State.Status}}
I1228 06:33:24.750667   46556 ssh_runner.go:195] Run: systemctl --version
I1228 06:33:24.750715   46556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932789
I1228 06:33:24.770599   46556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/functional-932789/id_rsa Username:docker}
I1228 06:33:24.867651   46556 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-932789 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "249229937"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4944818"
- id: c8188799c964050a8947d4e5b9688e3c168c625f9bdca001aef575d669389c49
repoDigests:
- localhost/minikube-local-cache-test@sha256:98fbff52ed87de1088e0407c0fa6783f6d01c9636f0494e26f13da53358eb51f
repoTags:
- localhost/minikube-local-cache-test:functional-932789
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-932789 image ls --format yaml --alsologtostderr:
I1228 06:33:24.500130   46441 out.go:360] Setting OutFile to fd 1 ...
I1228 06:33:24.500443   46441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.500457   46441 out.go:374] Setting ErrFile to fd 2...
I1228 06:33:24.500464   46441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:24.500727   46441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
I1228 06:33:24.501420   46441 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.501577   46441 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:24.502233   46441 cli_runner.go:164] Run: docker container inspect functional-932789 --format={{.State.Status}}
I1228 06:33:24.522637   46441 ssh_runner.go:195] Run: systemctl --version
I1228 06:33:24.522680   46441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932789
I1228 06:33:24.541224   46441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/functional-932789/id_rsa Username:docker}
I1228 06:33:24.632264   46441 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-932789 ssh pgrep buildkitd: exit status 1 (271.634384ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image build -t localhost/my-image:functional-932789 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-932789 image build -t localhost/my-image:functional-932789 testdata/build --alsologtostderr: (2.235464509s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-932789 image build -t localhost/my-image:functional-932789 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 79cbcf4bb8a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-932789
--> 2fd74bc2277
Successfully tagged localhost/my-image:functional-932789
2fd74bc22773f5d26527076c71635585ae81bed967dd3026c9dc2abd1f859078
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-932789 image build -t localhost/my-image:functional-932789 testdata/build --alsologtostderr:
I1228 06:33:25.232217   46807 out.go:360] Setting OutFile to fd 1 ...
I1228 06:33:25.232491   46807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:25.232501   46807 out.go:374] Setting ErrFile to fd 2...
I1228 06:33:25.232505   46807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:33:25.232687   46807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
I1228 06:33:25.233227   46807 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:25.233960   46807 config.go:182] Loaded profile config "functional-932789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1228 06:33:25.234429   46807 cli_runner.go:164] Run: docker container inspect functional-932789 --format={{.State.Status}}
I1228 06:33:25.253110   46807 ssh_runner.go:195] Run: systemctl --version
I1228 06:33:25.253164   46807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932789
I1228 06:33:25.271995   46807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/functional-932789/id_rsa Username:docker}
I1228 06:33:25.361727   46807 build_images.go:162] Building image from path: /tmp/build.4018066816.tar
I1228 06:33:25.361784   46807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1228 06:33:25.370326   46807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4018066816.tar
I1228 06:33:25.374393   46807 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4018066816.tar: stat -c "%s %y" /var/lib/minikube/build/build.4018066816.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4018066816.tar': No such file or directory
I1228 06:33:25.374426   46807 ssh_runner.go:362] scp /tmp/build.4018066816.tar --> /var/lib/minikube/build/build.4018066816.tar (3072 bytes)
I1228 06:33:25.392278   46807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4018066816
I1228 06:33:25.399748   46807 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4018066816 -xf /var/lib/minikube/build/build.4018066816.tar
I1228 06:33:25.407447   46807 crio.go:315] Building image: /var/lib/minikube/build/build.4018066816
I1228 06:33:25.407500   46807 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-932789 /var/lib/minikube/build/build.4018066816 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1228 06:33:27.387965   46807 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-932789 /var/lib/minikube/build/build.4018066816 --cgroup-manager=cgroupfs: (1.980437659s)
I1228 06:33:27.388061   46807 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4018066816
I1228 06:33:27.396358   46807 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4018066816.tar
I1228 06:33:27.403717   46807 build_images.go:218] Built localhost/my-image:functional-932789 from /tmp/build.4018066816.tar
I1228 06:33:27.403755   46807 build_images.go:134] succeeded building to: functional-932789
I1228 06:33:27.403761   46807 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
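The STEP lines above give enough to reconstruct the build context; a stand-in for testdata/build would look roughly like this (content.txt's actual contents are not in the log, so the echo is a placeholder). Note that on the crio runtime minikube delegates the build to podman inside the node, as the stderr shows:

mkdir build && cd build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo placeholder > content.txt
minikube -p functional-932789 image build -t localhost/my-image:functional-932789 .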

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)
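All three load-daemon tests follow the same host-to-cluster pattern; the commands, verbatim from the log apart from the binary name:

# Stage the image in the host docker daemon under the profile-scoped tag ...
docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
# ... then copy it from the daemon into the cluster's container runtime.
minikube -p functional-932789 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
minikube -p functional-932789 image ls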

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-932789 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
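Taken together, the last four tests cover a full save/remove/load round trip; condensed into one sequence (the tar path is shortened to a placeholder):

minikube -p functional-932789 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789 ./echo-server-save.tar
minikube -p functional-932789 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
minikube -p functional-932789 image load ./echo-server-save.tar
# Or skip the tar and export straight into the host docker daemon:
minikube -p functional-932789 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789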

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-932789 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.207.136 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
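With the tunnel from StartTunnel still running, the LoadBalancer service receives an ingress IP that is routable from the host, which is what AccessDirect verifies; a sketch (curl is not part of the test, and 10.110.207.136 is just this run's assigned IP):

minikube -p functional-932789 tunnel --alsologtostderr &
kubectl --context functional-932789 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://10.110.207.136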

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-932789 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-932789
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-932789
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-932789
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (87.64s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1228 06:34:12.844012    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:12.849327    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:12.859579    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:12.879855    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:12.920157    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:13.000499    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:13.160936    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:13.481520    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:14.121830    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:15.402174    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:17.962909    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:23.083508    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:33.324575    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:34:53.805180    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m26.866697256s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (87.64s)
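
For reference, the HA start above reduces to a single minikube invocation; the profile name ha-demo below is illustrative, while the flags are taken verbatim from the log (per the status output later in this report, --ha yields three control-plane nodes):

	# start an HA cluster on the docker driver with the crio runtime
	minikube start -p ha-demo --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	# confirm every control-plane and worker node reports Running/Configured
	minikube -p ha-demo status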

TestMultiControlPlane/serial/DeployApp (4.3s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 kubectl -- rollout status deployment/busybox: (2.514033796s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-5v586 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-65wqx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-p876j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-5v586 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-65wqx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-p876j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-5v586 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-65wqx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-p876j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.30s)
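
The DeployApp assertions amount to exec'ing nslookup in every busybox replica for an external name, the in-cluster service name, and its FQDN. A minimal sketch of the same loop; the label selector app=busybox is an assumption (the test instead lists all pod names via jsonpath, as above):

	for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
		kubectl exec "$pod" -- nslookup kubernetes.io                           # external DNS
		kubectl exec "$pod" -- nslookup kubernetes.default                      # cluster service, short name
		kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local    # cluster service, FQDN
	done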

TestMultiControlPlane/serial/PingHostFromPods (0.98s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-5v586 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-5v586 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-65wqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-65wqx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-p876j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 kubectl -- exec busybox-769dd8b7dd-p876j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
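
The pipeline above is worth unpacking: awk 'NR==5' keeps the line of busybox nslookup output that carries the resolved address, cut -d' ' -f3 pulls the IP field out of it, and the result (192.168.49.1, the docker network gateway in this run) is pinged once from inside the pod:

	# run inside a pod: resolve the host gateway name, then ping it once
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"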

TestMultiControlPlane/serial/AddWorkerNode (27.42s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 node add --alsologtostderr -v 5: (26.535554954s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.42s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-931597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1228 06:35:34.766257    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.44s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp testdata/cp-test.txt ha-931597:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2760289579/001/cp-test_ha-931597.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597:/home/docker/cp-test.txt ha-931597-m02:/home/docker/cp-test_ha-931597_ha-931597-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test_ha-931597_ha-931597-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597:/home/docker/cp-test.txt ha-931597-m03:/home/docker/cp-test_ha-931597_ha-931597-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test_ha-931597_ha-931597-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597:/home/docker/cp-test.txt ha-931597-m04:/home/docker/cp-test_ha-931597_ha-931597-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test_ha-931597_ha-931597-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp testdata/cp-test.txt ha-931597-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2760289579/001/cp-test_ha-931597-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m02:/home/docker/cp-test.txt ha-931597:/home/docker/cp-test_ha-931597-m02_ha-931597.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test_ha-931597-m02_ha-931597.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m02:/home/docker/cp-test.txt ha-931597-m03:/home/docker/cp-test_ha-931597-m02_ha-931597-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test_ha-931597-m02_ha-931597-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m02:/home/docker/cp-test.txt ha-931597-m04:/home/docker/cp-test_ha-931597-m02_ha-931597-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test_ha-931597-m02_ha-931597-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp testdata/cp-test.txt ha-931597-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2760289579/001/cp-test_ha-931597-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m03:/home/docker/cp-test.txt ha-931597:/home/docker/cp-test_ha-931597-m03_ha-931597.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test_ha-931597-m03_ha-931597.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m03:/home/docker/cp-test.txt ha-931597-m02:/home/docker/cp-test_ha-931597-m03_ha-931597-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test_ha-931597-m03_ha-931597-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m03:/home/docker/cp-test.txt ha-931597-m04:/home/docker/cp-test_ha-931597-m03_ha-931597-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test_ha-931597-m03_ha-931597-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp testdata/cp-test.txt ha-931597-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2760289579/001/cp-test_ha-931597-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m04:/home/docker/cp-test.txt ha-931597:/home/docker/cp-test_ha-931597-m04_ha-931597.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597 "sudo cat /home/docker/cp-test_ha-931597-m04_ha-931597.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m04:/home/docker/cp-test.txt ha-931597-m02:/home/docker/cp-test_ha-931597-m04_ha-931597-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test_ha-931597-m04_ha-931597-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 cp ha-931597-m04:/home/docker/cp-test.txt ha-931597-m03:/home/docker/cp-test_ha-931597-m04_ha-931597-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 ssh -n ha-931597-m03 "sudo cat /home/docker/cp-test_ha-931597-m04_ha-931597-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.44s)
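
The CopyFile matrix above exercises every direction minikube cp supports, each transfer verified with ssh ... cat. Reduced to one example per direction (the /tmp destination name is illustrative; node names are from this run):

	minikube -p ha-931597 cp testdata/cp-test.txt ha-931597:/home/docker/cp-test.txt                    # host -> node
	minikube -p ha-931597 cp ha-931597:/home/docker/cp-test.txt /tmp/cp-test-copy.txt                   # node -> host
	minikube -p ha-931597 cp ha-931597:/home/docker/cp-test.txt ha-931597-m02:/home/docker/cp-test.txt  # node -> node
	minikube -p ha-931597 ssh -n ha-931597-m02 "sudo cat /home/docker/cp-test.txt"                      # verify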

TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 node stop m02 --alsologtostderr -v 5: (12.022591601s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5: exit status 7 (683.709935ms)
-- stdout --
	ha-931597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-931597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-931597-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-931597-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1228 06:36:04.095310   66755 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:36:04.095416   66755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:36:04.095424   66755 out.go:374] Setting ErrFile to fd 2...
	I1228 06:36:04.095428   66755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:36:04.095649   66755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:36:04.095836   66755 out.go:368] Setting JSON to false
	I1228 06:36:04.095864   66755 mustload.go:66] Loading cluster: ha-931597
	I1228 06:36:04.095949   66755 notify.go:221] Checking for updates...
	I1228 06:36:04.096350   66755 config.go:182] Loaded profile config "ha-931597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:36:04.096376   66755 status.go:174] checking status of ha-931597 ...
	I1228 06:36:04.096858   66755 cli_runner.go:164] Run: docker container inspect ha-931597 --format={{.State.Status}}
	I1228 06:36:04.115236   66755 status.go:371] ha-931597 host status = "Running" (err=<nil>)
	I1228 06:36:04.115256   66755 host.go:66] Checking if "ha-931597" exists ...
	I1228 06:36:04.115524   66755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-931597
	I1228 06:36:04.132977   66755 host.go:66] Checking if "ha-931597" exists ...
	I1228 06:36:04.133341   66755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:36:04.133392   66755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-931597
	I1228 06:36:04.152020   66755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/ha-931597/id_rsa Username:docker}
	I1228 06:36:04.240359   66755 ssh_runner.go:195] Run: systemctl --version
	I1228 06:36:04.246501   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:36:04.258631   66755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:36:04.311581   66755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-28 06:36:04.302173244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:36:04.312073   66755 kubeconfig.go:125] found "ha-931597" server: "https://192.168.49.254:8443"
	I1228 06:36:04.312106   66755 api_server.go:166] Checking apiserver status ...
	I1228 06:36:04.312149   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:36:04.324061   66755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1228 06:36:04.332269   66755 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1245/cgroup
	I1228 06:36:04.340002   66755 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-9850471012bfbb80a280a5779569b998698095e402cc174302a02a6810aded53.scope/container/cgroup.freeze
	I1228 06:36:04.347193   66755 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:36:04.353396   66755 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:36:04.353418   66755 status.go:463] ha-931597 apiserver status = Running (err=<nil>)
	I1228 06:36:04.353431   66755 status.go:176] ha-931597 status: &{Name:ha-931597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:36:04.353455   66755 status.go:174] checking status of ha-931597-m02 ...
	I1228 06:36:04.353694   66755 cli_runner.go:164] Run: docker container inspect ha-931597-m02 --format={{.State.Status}}
	I1228 06:36:04.371175   66755 status.go:371] ha-931597-m02 host status = "Stopped" (err=<nil>)
	I1228 06:36:04.371206   66755 status.go:384] host is not running, skipping remaining checks
	I1228 06:36:04.371213   66755 status.go:176] ha-931597-m02 status: &{Name:ha-931597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:36:04.371235   66755 status.go:174] checking status of ha-931597-m03 ...
	I1228 06:36:04.371474   66755 cli_runner.go:164] Run: docker container inspect ha-931597-m03 --format={{.State.Status}}
	I1228 06:36:04.388904   66755 status.go:371] ha-931597-m03 host status = "Running" (err=<nil>)
	I1228 06:36:04.388926   66755 host.go:66] Checking if "ha-931597-m03" exists ...
	I1228 06:36:04.389222   66755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-931597-m03
	I1228 06:36:04.406207   66755 host.go:66] Checking if "ha-931597-m03" exists ...
	I1228 06:36:04.406472   66755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:36:04.406516   66755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-931597-m03
	I1228 06:36:04.424654   66755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/ha-931597-m03/id_rsa Username:docker}
	I1228 06:36:04.512308   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:36:04.524567   66755 kubeconfig.go:125] found "ha-931597" server: "https://192.168.49.254:8443"
	I1228 06:36:04.524593   66755 api_server.go:166] Checking apiserver status ...
	I1228 06:36:04.524628   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:36:04.537168   66755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1228 06:36:04.545857   66755 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1177/cgroup
	I1228 06:36:04.553711   66755 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-ca69dbcf9240aad107d8f130bf70368cb5dcfa9c5c22d07b7681c31047a6a807.scope/container/cgroup.freeze
	I1228 06:36:04.561437   66755 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:36:04.565378   66755 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:36:04.565399   66755 status.go:463] ha-931597-m03 apiserver status = Running (err=<nil>)
	I1228 06:36:04.565412   66755 status.go:176] ha-931597-m03 status: &{Name:ha-931597-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:36:04.565424   66755 status.go:174] checking status of ha-931597-m04 ...
	I1228 06:36:04.565655   66755 cli_runner.go:164] Run: docker container inspect ha-931597-m04 --format={{.State.Status}}
	I1228 06:36:04.583943   66755 status.go:371] ha-931597-m04 host status = "Running" (err=<nil>)
	I1228 06:36:04.583966   66755 host.go:66] Checking if "ha-931597-m04" exists ...
	I1228 06:36:04.584256   66755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-931597-m04
	I1228 06:36:04.604777   66755 host.go:66] Checking if "ha-931597-m04" exists ...
	I1228 06:36:04.605024   66755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:36:04.605091   66755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-931597-m04
	I1228 06:36:04.622309   66755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/ha-931597-m04/id_rsa Username:docker}
	I1228 06:36:04.709369   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:36:04.722378   66755 status.go:176] ha-931597-m04 status: &{Name:ha-931597-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
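
Note the exit code above: with m02 stopped, minikube status exits 7 instead of 0, so scripts can branch on the return code rather than parsing the per-node text. A minimal sketch:

	minikube -p ha-931597 status
	rc=$?
	if [ "$rc" -ne 0 ]; then
		echo "cluster degraded: status exited with $rc"   # 7 in this run, one host Stopped
	fi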

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 node start m02 --alsologtostderr -v 5: (7.373352963s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 stop --alsologtostderr -v 5: (37.252484743s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 start --wait true --alsologtostderr -v 5
E1228 06:36:56.686990    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.736077    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.741371    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.751641    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.771904    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.812229    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:48.893247    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:49.054206    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:49.374731    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:50.015582    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 start --wait true --alsologtostderr -v 5: (58.386829192s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.76s)
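
RestartClusterKeepsNodes asserts that a full stop/start round trip preserves the node inventory: the node list captured before the stop must match the one captured after the restart. The same check in shell form (assuming node names and IPs stay stable across the restart, as they do in this run):

	before=$(minikube -p ha-931597 node list)
	minikube -p ha-931597 stop
	minikube -p ha-931597 start --wait true
	after=$(minikube -p ha-931597 node list)
	[ "$before" = "$after" ] && echo "node list preserved"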

TestMultiControlPlane/serial/DeleteSecondaryNode (11.04s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node delete m03 --alsologtostderr -v 5
E1228 06:37:51.296797    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:53.857245    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:37:58.977926    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 node delete m03 --alsologtostderr -v 5: (10.232729955s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.04s)
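
The post-delete validation uses a go-template rather than jsonpath so that every node collapses to a single line carrying its Ready condition. The same template the test passes, reformatted onto one readable command:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

Each emitted line should read True once the remaining nodes have settled.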

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (36.25s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 stop --alsologtostderr -v 5
E1228 06:38:09.218089    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:38:29.698742    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 stop --alsologtostderr -v 5: (36.137301998s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5: exit status 7 (110.292345ms)
-- stdout --
	ha-931597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-931597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-931597-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1228 06:38:38.303399   80211 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:38:38.303487   80211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:38:38.303491   80211 out.go:374] Setting ErrFile to fd 2...
	I1228 06:38:38.303495   80211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:38:38.303690   80211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:38:38.303845   80211 out.go:368] Setting JSON to false
	I1228 06:38:38.303866   80211 mustload.go:66] Loading cluster: ha-931597
	I1228 06:38:38.303912   80211 notify.go:221] Checking for updates...
	I1228 06:38:38.304270   80211 config.go:182] Loaded profile config "ha-931597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:38:38.304298   80211 status.go:174] checking status of ha-931597 ...
	I1228 06:38:38.304811   80211 cli_runner.go:164] Run: docker container inspect ha-931597 --format={{.State.Status}}
	I1228 06:38:38.322943   80211 status.go:371] ha-931597 host status = "Stopped" (err=<nil>)
	I1228 06:38:38.322972   80211 status.go:384] host is not running, skipping remaining checks
	I1228 06:38:38.322986   80211 status.go:176] ha-931597 status: &{Name:ha-931597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:38:38.323008   80211 status.go:174] checking status of ha-931597-m02 ...
	I1228 06:38:38.323283   80211 cli_runner.go:164] Run: docker container inspect ha-931597-m02 --format={{.State.Status}}
	I1228 06:38:38.340698   80211 status.go:371] ha-931597-m02 host status = "Stopped" (err=<nil>)
	I1228 06:38:38.340717   80211 status.go:384] host is not running, skipping remaining checks
	I1228 06:38:38.340723   80211 status.go:176] ha-931597-m02 status: &{Name:ha-931597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:38:38.340742   80211 status.go:174] checking status of ha-931597-m04 ...
	I1228 06:38:38.340967   80211 cli_runner.go:164] Run: docker container inspect ha-931597-m04 --format={{.State.Status}}
	I1228 06:38:38.357772   80211 status.go:371] ha-931597-m04 host status = "Stopped" (err=<nil>)
	I1228 06:38:38.357791   80211 status.go:384] host is not running, skipping remaining checks
	I1228 06:38:38.357796   80211 status.go:176] ha-931597-m04 status: &{Name:ha-931597-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.25s)

TestMultiControlPlane/serial/RestartCluster (55.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1228 06:39:10.659318    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:39:12.843913    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.193547523s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.98s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (28.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 node add --control-plane --alsologtostderr -v 5
E1228 06:39:40.527702    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-931597 node add --control-plane --alsologtostderr -v 5: (27.500913892s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-931597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (28.40s)
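
Unlike the earlier plain node add (which joined m04 as a worker), --control-plane joins the new node as an additional control-plane member, bringing the cluster back to three API servers after m03 was deleted:

	minikube -p ha-931597 node add --control-plane
	minikube -p ha-931597 status    # the new node should report type: Control Plane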

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (36.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-995051 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1228 06:40:32.582600    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-995051 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (36.473775841s)
--- PASS: TestJSONOutput/start/Command (36.47s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-995051 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-995051 --output=json --user=testUser: (12.037096938s)
--- PASS: TestJSONOutput/stop/Command (12.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-481248 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-481248 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.94772ms)
-- stdout --
	{"specversion":"1.0","id":"c0053672-222a-4c0a-add5-ada688e59edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-481248] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b115496a-a037-48ea-a4fd-119536906c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"afff6fea-cdaa-4bbf-83cd-9b810e73a87c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4ec6c15b-c34f-488d-b026-f57233ddfb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig"}}
	{"specversion":"1.0","id":"2089ad4c-b467-4d22-b0d7-566440e6c83e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube"}}
	{"specversion":"1.0","id":"53cdb298-66f9-4238-bf1d-f12c65bffc98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e1a07b8a-0fbe-465d-bb2f-e74b79f5effa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89a5ad11-8da0-4812-aaf6-75f53a0d90e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-481248" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-481248
--- PASS: TestErrorJSONOutput (0.22s)
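
Every line emitted under --output=json is a CloudEvents envelope, as the dump above shows: the type field distinguishes step, info, and error events, and the payload sits under data. A consumer sketch; jq is not part of this suite, and the field names are taken from the output above:

	# print step and error events as "name: message" lines
	minikube start -p demo --output=json --driver=docker \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.step" or .type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message)"'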

TestKicCustomNetwork/create_custom_network (24.03s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-144550 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-144550 --network=: (21.927717511s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-144550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-144550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-144550: (2.080880991s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.03s)

TestKicCustomNetwork/use_default_bridge_network (19.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-883779 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-883779 --network=bridge: (17.403296269s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-883779" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-883779
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-883779: (1.982259749s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (19.40s)
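
Note: --network=bridge attaches the node to Docker's default bridge network instead of creating a new one. Equivalent sketch (profile name illustrative; no extra profile-named network is expected in the listing):

    minikube start -p bridge-demo --network=bridge
    docker network ls --format '{{.Name}}'
    minikube delete -p bridge-demo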

                                                
                                    
TestKicExistingNetwork (19.92s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1228 06:41:54.426954    9076 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 06:41:54.443437    9076 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 06:41:54.443515    9076 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1228 06:41:54.443535    9076 cli_runner.go:164] Run: docker network inspect existing-network
W1228 06:41:54.460086    9076 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1228 06:41:54.460113    9076 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1228 06:41:54.460126    9076 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1228 06:41:54.460268    9076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 06:41:54.477354    9076 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83d3c063481b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:56:51:df:60:88} reservation:<nil>}
I1228 06:41:54.477683    9076 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f6ade0}
I1228 06:41:54.477713    9076 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1228 06:41:54.477777    9076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1228 06:41:54.523559    9076 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-580281 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-580281 --network=existing-network: (17.804537298s)
helpers_test.go:176: Cleaning up "existing-network-580281" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-580281
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-580281: (1.981822013s)
I1228 06:42:14.326696    9076 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (19.92s)
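
Note: the trace above shows the harness probing for the network, skipping the taken 192.168.49.0/24 subnet, and pre-creating existing-network on 192.168.58.0/24 before minikube attaches to it. A hand-run sketch of the same flow (minikube's own create command additionally sets MTU options and minikube labels, as logged; names illustrative):

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    minikube start -p existing-demo --network=existing-network
    minikube delete -p existing-demo
    docker network rm existing-network   # remove the manually created network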

                                                
                                    
TestKicCustomSubnet (22.73s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-322985 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-322985 --subnet=192.168.60.0/24: (20.634894779s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-322985 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-322985" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-322985
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-322985: (2.077711772s)
--- PASS: TestKicCustomSubnet (22.73s)
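
Note: --subnet pins the CIDR of the network minikube creates, and the test reads it back with the inspect format shown above. Manual check (profile name illustrative):

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24
    minikube delete -p subnet-demo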

                                                
                                    
TestKicStaticIP (20.61s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-065790 --static-ip=192.168.200.200
E1228 06:42:48.736354    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-065790 --static-ip=192.168.200.200: (18.374638039s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-065790 ip
helpers_test.go:176: Cleaning up "static-ip-065790" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-065790
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-065790: (2.094517742s)
--- PASS: TestKicStaticIP (20.61s)
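
Note: --static-ip starts the node container on a fixed address, which the test verifies with `minikube ip`. Sketch (profile name illustrative):

    minikube start -p static-demo --static-ip=192.168.200.200
    minikube -p static-demo ip     # expected: 192.168.200.200
    minikube delete -p static-demo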

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (42.06s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-853081 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-853081 --driver=docker  --container-runtime=crio: (16.039016234s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-855009 --driver=docker  --container-runtime=crio
E1228 06:43:16.424210    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-855009 --driver=docker  --container-runtime=crio: (20.123398795s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-853081
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-855009
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-855009" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-855009
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-855009: (2.319447764s)
helpers_test.go:176: Cleaning up "first-853081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-853081
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-853081: (2.335044549s)
--- PASS: TestMinikubeProfile (42.06s)
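
Note: the test exercises switching the active profile and listing profiles as JSON. The same flow by hand (profile names illustrative):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first           # make "first" the active profile
    minikube profile list -ojson     # machine-readable view of both profiles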

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-975335 --memory=3072 --mount-string /tmp/TestMountStartserial3471111899/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-975335 --memory=3072 --mount-string /tmp/TestMountStartserial3471111899/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.819750052s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.82s)
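
Note: the mount flags above wire a host directory into the guest at start-up, and --no-kubernetes keeps the test focused on the mount itself. Condensed sketch using the same flags (host path and profile name illustrative):

    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string /tmp/data:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543
    minikube -p mount-demo ssh -- ls /minikube-host   # should list the host directory's contents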

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-975335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-992090 --memory=3072 --mount-string /tmp/TestMountStartserial3471111899/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-992090 --memory=3072 --mount-string /tmp/TestMountStartserial3471111899/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.59470569s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-975335 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-975335 --alsologtostderr -v=5: (1.70133297s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-992090
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-992090: (1.247558372s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.04s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-992090
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-992090: (6.037754825s)
--- PASS: TestMountStart/serial/RestartStopped (7.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
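
Note: the Stop/RestartStopped/VerifyMountPostStop sequence above suggests mount settings persist in the profile config, since the logged restart passes no mount flags yet the mount is verified afterwards. Sketch (continuing the illustrative names from the earlier note):

    minikube stop -p mount-demo
    minikube start -p mount-demo                      # no mount flags repeated
    minikube -p mount-demo ssh -- ls /minikube-host   # mount is back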

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (60.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271949 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1228 06:44:12.846078    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271949 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.487578492s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.97s)
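
Note: --nodes=2 with --wait=true brings up a control plane plus one worker and blocks until the cluster reports ready. Sketch (profile name illustrative):

    minikube start -p multi-demo --nodes=2 --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio
    minikube -p multi-demo status   # expect multi-demo (Control Plane) and multi-demo-m02 (Worker)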

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-271949 -- rollout status deployment/busybox: (2.049882707s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-htxqh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-m5fsr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-htxqh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-m5fsr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-htxqh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-m5fsr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.36s)
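
Note: the deployment test drives Kubernetes through minikube's bundled kubectl: apply the busybox manifest, wait for the rollout, then run DNS lookups from each pod. Sketch (<busybox-pod> is a placeholder; take a real name from `get pods`):

    minikube kubectl -p multi-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multi-demo -- rollout status deployment/busybox
    minikube kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multi-demo -- exec <busybox-pod> -- nslookup kubernetes.default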

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-htxqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-htxqh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-m5fsr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-271949 -- exec busybox-769dd8b7dd-m5fsr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
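
Note: pods reach the host through the host.minikube.internal name; the test resolves it and pings the resulting gateway address. Sketch (placeholder pod name; 192.168.67.1 is this run's gateway, yours may differ):

    minikube kubectl -p multi-demo -- exec <busybox-pod> -- \
      sh -c "nslookup host.minikube.internal"
    minikube kubectl -p multi-demo -- exec <busybox-pod> -- \
      sh -c "ping -c 1 192.168.67.1"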

                                                
                                    
TestMultiNode/serial/AddNode (23.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-271949 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-271949 -v=5 --alsologtostderr: (23.177851375s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.81s)
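
Note: `node add` appends another worker to a running cluster, after which status should show a third node. Sketch:

    minikube node add -p multi-demo    # joins multi-demo-m03 as a worker
    minikube -p multi-demo status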

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-271949 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp testdata/cp-test.txt multinode-271949:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022018136/001/cp-test_multinode-271949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949:/home/docker/cp-test.txt multinode-271949-m02:/home/docker/cp-test_multinode-271949_multinode-271949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test_multinode-271949_multinode-271949-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949:/home/docker/cp-test.txt multinode-271949-m03:/home/docker/cp-test_multinode-271949_multinode-271949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test_multinode-271949_multinode-271949-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp testdata/cp-test.txt multinode-271949-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022018136/001/cp-test_multinode-271949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m02:/home/docker/cp-test.txt multinode-271949:/home/docker/cp-test_multinode-271949-m02_multinode-271949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test_multinode-271949-m02_multinode-271949.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m02:/home/docker/cp-test.txt multinode-271949-m03:/home/docker/cp-test_multinode-271949-m02_multinode-271949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test_multinode-271949-m02_multinode-271949-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp testdata/cp-test.txt multinode-271949-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022018136/001/cp-test_multinode-271949-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m03:/home/docker/cp-test.txt multinode-271949:/home/docker/cp-test_multinode-271949-m03_multinode-271949.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949 "sudo cat /home/docker/cp-test_multinode-271949-m03_multinode-271949.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 cp multinode-271949-m03:/home/docker/cp-test.txt multinode-271949-m02:/home/docker/cp-test_multinode-271949-m03_multinode-271949-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 ssh -n multinode-271949-m02 "sudo cat /home/docker/cp-test_multinode-271949-m03_multinode-271949-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.37s)
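
Note: `minikube cp` copies host-to-node, node-to-host, and node-to-node, and each copy above is verified with `ssh -n <node>`. Condensed sketch (paths and names illustrative):

    minikube -p multi-demo cp ./cp-test.txt multi-demo:/home/docker/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt \
      multi-demo-m02:/home/docker/cp-test.txt
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"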

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-271949 node stop m03: (1.257410283s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271949 status: exit status 7 (483.970399ms)

                                                
                                                
-- stdout --
	multinode-271949
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271949-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271949-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr: exit status 7 (506.328564ms)

                                                
                                                
-- stdout --
	multinode-271949
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271949-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271949-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:45:45.786045  139831 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:45:45.786263  139831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:45:45.786271  139831 out.go:374] Setting ErrFile to fd 2...
	I1228 06:45:45.786274  139831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:45:45.786458  139831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:45:45.786610  139831 out.go:368] Setting JSON to false
	I1228 06:45:45.786631  139831 mustload.go:66] Loading cluster: multinode-271949
	I1228 06:45:45.786716  139831 notify.go:221] Checking for updates...
	I1228 06:45:45.786958  139831 config.go:182] Loaded profile config "multinode-271949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:45:45.786974  139831 status.go:174] checking status of multinode-271949 ...
	I1228 06:45:45.787397  139831 cli_runner.go:164] Run: docker container inspect multinode-271949 --format={{.State.Status}}
	I1228 06:45:45.807136  139831 status.go:371] multinode-271949 host status = "Running" (err=<nil>)
	I1228 06:45:45.807180  139831 host.go:66] Checking if "multinode-271949" exists ...
	I1228 06:45:45.807484  139831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271949
	I1228 06:45:45.825065  139831 host.go:66] Checking if "multinode-271949" exists ...
	I1228 06:45:45.825395  139831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:45:45.825458  139831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271949
	I1228 06:45:45.845601  139831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/multinode-271949/id_rsa Username:docker}
	I1228 06:45:45.933220  139831 ssh_runner.go:195] Run: systemctl --version
	I1228 06:45:45.939584  139831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:45:45.951397  139831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:45:46.009569  139831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-28 06:45:45.999702543 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:45:46.010094  139831 kubeconfig.go:125] found "multinode-271949" server: "https://192.168.67.2:8443"
	I1228 06:45:46.010127  139831 api_server.go:166] Checking apiserver status ...
	I1228 06:45:46.010169  139831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:45:46.021600  139831 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup
	I1228 06:45:46.030545  139831 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1243/cgroup
	I1228 06:45:46.038735  139831 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-aa599798dfe23ddea34bb9fc9697b634cb66485b1bcdf21ea8b61fd0a5bb573b.scope/container/cgroup.freeze
	I1228 06:45:46.046301  139831 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1228 06:45:46.050257  139831 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1228 06:45:46.050276  139831 status.go:463] multinode-271949 apiserver status = Running (err=<nil>)
	I1228 06:45:46.050285  139831 status.go:176] multinode-271949 status: &{Name:multinode-271949 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:45:46.050299  139831 status.go:174] checking status of multinode-271949-m02 ...
	I1228 06:45:46.050511  139831 cli_runner.go:164] Run: docker container inspect multinode-271949-m02 --format={{.State.Status}}
	I1228 06:45:46.068984  139831 status.go:371] multinode-271949-m02 host status = "Running" (err=<nil>)
	I1228 06:45:46.069005  139831 host.go:66] Checking if "multinode-271949-m02" exists ...
	I1228 06:45:46.069277  139831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271949-m02
	I1228 06:45:46.086732  139831 host.go:66] Checking if "multinode-271949-m02" exists ...
	I1228 06:45:46.087015  139831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:45:46.087141  139831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271949-m02
	I1228 06:45:46.105650  139831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22352-5550/.minikube/machines/multinode-271949-m02/id_rsa Username:docker}
	I1228 06:45:46.193842  139831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:45:46.217467  139831 status.go:176] multinode-271949-m02 status: &{Name:multinode-271949-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:45:46.217505  139831 status.go:174] checking status of multinode-271949-m03 ...
	I1228 06:45:46.217788  139831 cli_runner.go:164] Run: docker container inspect multinode-271949-m03 --format={{.State.Status}}
	I1228 06:45:46.235555  139831 status.go:371] multinode-271949-m03 host status = "Stopped" (err=<nil>)
	I1228 06:45:46.235575  139831 status.go:384] host is not running, skipping remaining checks
	I1228 06:45:46.235580  139831 status.go:176] multinode-271949-m03 status: &{Name:multinode-271949-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
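
Note: with one node stopped, `minikube status` exits 7 rather than 0 (as seen above), so scripts should treat exit 7 as "profile exists but is degraded or stopped" rather than a hard failure. Sketch:

    minikube -p multi-demo node stop m03
    minikube -p multi-demo status; echo "exit=$?"   # 7 while m03 is down
    minikube -p multi-demo node start m03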

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-271949 node start m03 -v=5 --alsologtostderr: (6.291900163s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (69.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271949
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-271949
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-271949: (25.073861856s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271949 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271949 --wait=true -v=5 --alsologtostderr: (44.618454734s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271949
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.81s)
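
Note: a full stop/start cycle preserves the node set, which the test checks by comparing `node list` before and after. Sketch:

    minikube node list -p multi-demo
    minikube stop -p multi-demo
    minikube start -p multi-demo --wait=true
    minikube node list -p multi-demo   # same nodes as before the stop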

                                                
                                    
TestMultiNode/serial/DeleteNode (6.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-271949 node delete m03: (5.425128322s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.02s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-271949 stop: (23.91310117s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271949 status: exit status 7 (99.521745ms)

                                                
                                                
-- stdout --
	multinode-271949
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271949-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr: exit status 7 (96.352884ms)

                                                
                                                
-- stdout --
	multinode-271949
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271949-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:47:33.109554  149089 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:47:33.109821  149089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:47:33.109830  149089 out.go:374] Setting ErrFile to fd 2...
	I1228 06:47:33.109834  149089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:47:33.110017  149089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:47:33.110182  149089 out.go:368] Setting JSON to false
	I1228 06:47:33.110204  149089 mustload.go:66] Loading cluster: multinode-271949
	I1228 06:47:33.110340  149089 notify.go:221] Checking for updates...
	I1228 06:47:33.110535  149089 config.go:182] Loaded profile config "multinode-271949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:47:33.110552  149089 status.go:174] checking status of multinode-271949 ...
	I1228 06:47:33.110947  149089 cli_runner.go:164] Run: docker container inspect multinode-271949 --format={{.State.Status}}
	I1228 06:47:33.132647  149089 status.go:371] multinode-271949 host status = "Stopped" (err=<nil>)
	I1228 06:47:33.132684  149089 status.go:384] host is not running, skipping remaining checks
	I1228 06:47:33.132694  149089 status.go:176] multinode-271949 status: &{Name:multinode-271949 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:47:33.132744  149089 status.go:174] checking status of multinode-271949-m02 ...
	I1228 06:47:33.133006  149089 cli_runner.go:164] Run: docker container inspect multinode-271949-m02 --format={{.State.Status}}
	I1228 06:47:33.150654  149089 status.go:371] multinode-271949-m02 host status = "Stopped" (err=<nil>)
	I1228 06:47:33.150671  149089 status.go:384] host is not running, skipping remaining checks
	I1228 06:47:33.150676  149089 status.go:176] multinode-271949-m02 status: &{Name:multinode-271949-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271949 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1228 06:47:48.736100    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271949 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.023075311s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-271949 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.62s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-271949
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271949-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-271949-m02 --driver=docker  --container-runtime=crio: exit status 14 (77.91966ms)

                                                
                                                
-- stdout --
	* [multinode-271949-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-271949-m02' is duplicated with machine name 'multinode-271949-m02' in profile 'multinode-271949'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-271949-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-271949-m03 --driver=docker  --container-runtime=crio: (20.069114119s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-271949
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-271949: exit status 80 (293.344619ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-271949 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-271949-m03 already exists in multinode-271949-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-271949-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-271949-m03: (2.330717714s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.83s)

                                                
                                    
TestScheduledStopUnix (97.07s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-847755 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-847755 --memory=3072 --driver=docker  --container-runtime=crio: (19.573045651s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847755 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1228 06:49:11.341468  159031 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:49:11.342143  159031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:11.342157  159031 out.go:374] Setting ErrFile to fd 2...
	I1228 06:49:11.342164  159031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:11.342610  159031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:49:11.343237  159031 out.go:368] Setting JSON to false
	I1228 06:49:11.343381  159031 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:11.343849  159031 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:49:11.343922  159031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/scheduled-stop-847755/config.json ...
	I1228 06:49:11.344112  159031 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:11.344211  159031 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-847755 -n scheduled-stop-847755
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1228 06:49:11.727089  159196 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:49:11.727231  159196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:11.727242  159196 out.go:374] Setting ErrFile to fd 2...
	I1228 06:49:11.727248  159196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:11.727436  159196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:49:11.727674  159196 out.go:368] Setting JSON to false
	I1228 06:49:11.727893  159196 daemonize_unix.go:73] killing process 159082 as it is an old scheduled stop
	I1228 06:49:11.728012  159196 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:11.728346  159196 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:49:11.728427  159196 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/scheduled-stop-847755/config.json ...
	I1228 06:49:11.728623  159196 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:11.728737  159196 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1228 06:49:11.732387    9076 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/scheduled-stop-847755/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847755 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1228 06:49:12.844459    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847755 -n scheduled-stop-847755
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847755
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847755 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1228 06:49:37.615572  159903 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:49:37.615817  159903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:37.615825  159903 out.go:374] Setting ErrFile to fd 2...
	I1228 06:49:37.615829  159903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:49:37.616043  159903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:49:37.616273  159903 out.go:368] Setting JSON to false
	I1228 06:49:37.616343  159903 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:37.616651  159903 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:49:37.616724  159903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/scheduled-stop-847755/config.json ...
	I1228 06:49:37.616912  159903 mustload.go:66] Loading cluster: scheduled-stop-847755
	I1228 06:49:37.617008  159903 config.go:182] Loaded profile config "scheduled-stop-847755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847755
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-847755: exit status 7 (76.422532ms)

                                                
                                                
-- stdout --
	scheduled-stop-847755
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847755 -n scheduled-stop-847755
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847755 -n scheduled-stop-847755: exit status 7 (77.234275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-847755" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-847755
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-847755: (6.005332458s)
--- PASS: TestScheduledStopUnix (97.07s)
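
Note: scheduled stop arms a background process that powers the profile down after the given delay; the pending stop is visible via the TimeToStop status field, and --cancel-scheduled disarms it, as exercised above. Sketch (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m
    minikube status -p sched-demo --format={{.TimeToStop}}
    minikube stop -p sched-demo --cancel-scheduled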

                                                
                                    
TestInsufficientStorage (8.68s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-614853 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-614853 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.236274911s)

-- stdout --
	{"specversion":"1.0","id":"f6644415-cf21-426c-8126-00e2667d4f8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-614853] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c895b416-6279-4660-a13a-d7293d85ebe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"80d45fe5-cae6-4582-a7ac-c251c789bb21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55fa914c-6a89-4f3a-a2be-d704a1643531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig"}}
	{"specversion":"1.0","id":"34917a73-9702-43d7-9539-904b83e2ed20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube"}}
	{"specversion":"1.0","id":"5f04392e-1ef3-449d-acd4-d365f8350e66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"53d830e3-6729-4279-b2d4-9f0d7a01023e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"466455ae-e76a-41dd-879b-3c97de0488a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"76faedda-6fb4-4a11-97dc-be25e85e96cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"25e8c684-1b8e-48be-adce-b032f96866c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bea50324-58b9-4c01-ad69-fc0e032b61d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bf84ba9e-dd54-455c-ae6c-4bf90f3b9001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-614853\" primary control-plane node in \"insufficient-storage-614853\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fede3c2e-85f4-41fe-b46b-3e1e5363d4bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766884053-22351 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b5a70a4-0321-4971-9da2-caf3557e94bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f02c0fa-8f84-4cf8-9fd3-92abbeef771f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-614853 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-614853 --output=json --layout=cluster: exit status 7 (284.730034ms)

-- stdout --
	{"Name":"insufficient-storage-614853","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-614853","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:50:35.293859  162289 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-614853" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-614853 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-614853 --output=json --layout=cluster: exit status 7 (273.096161ms)

-- stdout --
	{"Name":"insufficient-storage-614853","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-614853","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1228 06:50:35.567648  162400 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-614853" does not appear in /home/jenkins/minikube-integration/22352-5550/kubeconfig
	E1228 06:50:35.578145  162400 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/insufficient-storage-614853/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-614853" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-614853
E1228 06:50:35.887872    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-614853: (1.8885746s)
--- PASS: TestInsufficientStorage (8.68s)
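
For reference, a minimal sketch (not part of the test suite; the program itself is an assumption) of how the CloudEvents-style JSON lines that `minikube start --output=json` printed above can be decoded in Go. The envelope and data field names are copied from the stdout block of this test:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents envelope seen in the --output=json stream above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line, e.g. piped from `minikube start --output=json`.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // single events can be very long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines interleaved in the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			// e.g. name=RSRC_DOCKER_STORAGE, exitcode=26 in the run above
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}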

TestRunningBinaryUpgrade (47.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2497968391 start -p running-upgrade-674415 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2497968391 start -p running-upgrade-674415 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.385801996s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-674415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-674415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.902022561s)
helpers_test.go:176: Cleaning up "running-upgrade-674415" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-674415
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-674415: (2.471941383s)
--- PASS: TestRunningBinaryUpgrade (47.35s)

TestKubernetesUpgrade (80.82s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.249636146s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-450365 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-450365 --alsologtostderr: (1.894476293s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-450365 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-450365 status --format={{.Host}}: exit status 7 (81.160769ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.278253899s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-450365 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (99.291932ms)

-- stdout --
	* [kubernetes-upgrade-450365] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-450365
	    minikube start -p kubernetes-upgrade-450365 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4503652 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-450365 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450365 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.440329211s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-450365" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-450365
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-450365: (2.706313117s)
--- PASS: TestKubernetesUpgrade (80.82s)
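
As an aside, a minimal sketch (hypothetical, not from the suite) of the kind of check behind the `kubectl version --output=json` step above; the struct covers only the gitVersion fields that matter here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type versionInfo struct {
	GitVersion string `json:"gitVersion"`
}

type versionOutput struct {
	ClientVersion versionInfo `json:"clientVersion"`
	ServerVersion versionInfo `json:"serverVersion"`
}

func main() {
	// Same invocation the test makes against the upgraded cluster's context.
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-450365",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// After the upgrade step the server is expected to report v1.35.0.
	fmt.Println("server:", v.ServerVersion.GitVersion, "client:", v.ClientVersion.GitVersion)
}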

TestMissingContainerUpgrade (67.73s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3102272606 start -p missing-upgrade-937201 --memory=3072 --driver=docker  --container-runtime=crio
E1228 06:52:48.735498    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3102272606 start -p missing-upgrade-937201 --memory=3072 --driver=docker  --container-runtime=crio: (20.002799682s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-937201
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-937201: (4.293405019s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-937201
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-937201 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-937201 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.59787748s)
helpers_test.go:176: Cleaning up "missing-upgrade-937201" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-937201
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-937201: (3.349946124s)
--- PASS: TestMissingContainerUpgrade (67.73s)

TestPause/serial/Start (54.43s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-407564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-407564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.425746534s)
--- PASS: TestPause/serial/Start (54.43s)

TestStoppedBinaryUpgrade/Setup (0.77s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

TestStoppedBinaryUpgrade/Upgrade (308.62s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3421981788 start -p stopped-upgrade-416029 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3421981788 start -p stopped-upgrade-416029 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.388980355s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3421981788 -p stopped-upgrade-416029 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3421981788 -p stopped-upgrade-416029 stop: (2.915714023s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-416029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-416029 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.314708346s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (308.62s)

TestPause/serial/SecondStartNoReconfiguration (7.59s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-407564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-407564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.572974214s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (74.138685ms)

-- stdout --
	* [NoKubernetes-606662] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (20.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.986520698s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-606662 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (20.31s)

TestNoKubernetes/serial/StartWithStopK8s (5.73s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (2.851090639s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-606662 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-606662 status -o json: exit status 2 (345.340386ms)

-- stdout --
	{"Name":"NoKubernetes-606662","Host":"Running","Kubelet":"Stopped","APIServer":"Running","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-606662
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-606662: (2.529782248s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.73s)
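
For reference, the status document printed above decodes into a small struct; a minimal sketch (hypothetical program, field names copied from the stdout block):

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// The exact JSON emitted by the exit-status-2 `status -o json` call above.
	raw := `{"Name":"NoKubernetes-606662","Host":"Running","Kubelet":"Stopped","APIServer":"Running","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Exit status 2 corresponds to a running host whose kubelet is stopped.
	fmt.Printf("%s: host=%s kubelet=%s\n", st.Name, st.Host, st.Kubelet)
}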

TestNoKubernetes/serial/Start (7.01s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606662 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.008423655s)
--- PASS: TestNoKubernetes/serial/Start (7.01s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22352-5550/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-606662 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-606662 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.83309ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (17.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.382899204s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (17.28s)

TestNetworkPlugins/group/false (3.44s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-610916 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-610916 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.995235ms)

-- stdout --
	* [false-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1228 06:52:39.566913  200273 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:52:39.567016  200273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:52:39.567037  200273 out.go:374] Setting ErrFile to fd 2...
	I1228 06:52:39.567047  200273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:52:39.567264  200273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-5550/.minikube/bin
	I1228 06:52:39.567756  200273 out.go:368] Setting JSON to false
	I1228 06:52:39.569012  200273 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2112,"bootTime":1766902648,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1228 06:52:39.569097  200273 start.go:143] virtualization: kvm guest
	I1228 06:52:39.571319  200273 out.go:179] * [false-610916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1228 06:52:39.572738  200273 notify.go:221] Checking for updates...
	I1228 06:52:39.572759  200273 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:52:39.573948  200273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:52:39.575146  200273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-5550/kubeconfig
	I1228 06:52:39.576433  200273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-5550/.minikube
	I1228 06:52:39.577545  200273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1228 06:52:39.578985  200273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:52:39.580863  200273 config.go:182] Loaded profile config "NoKubernetes-606662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1228 06:52:39.580999  200273 config.go:182] Loaded profile config "cert-expiration-623987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1228 06:52:39.581137  200273 config.go:182] Loaded profile config "stopped-upgrade-416029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1228 06:52:39.581257  200273 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:52:39.606573  200273 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1228 06:52:39.606650  200273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:52:39.667203  200273 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-28 06:52:39.656687167 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1228 06:52:39.667352  200273 docker.go:319] overlay module found
	I1228 06:52:39.669330  200273 out.go:179] * Using the docker driver based on user configuration
	I1228 06:52:39.670713  200273 start.go:309] selected driver: docker
	I1228 06:52:39.670738  200273 start.go:928] validating driver "docker" against <nil>
	I1228 06:52:39.670752  200273 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:52:39.672606  200273 out.go:203] 
	W1228 06:52:39.673884  200273 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1228 06:52:39.675170  200273 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-610916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-610916

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-610916

>>> host: /etc/nsswitch.conf:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/hosts:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/resolv.conf:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-610916

>>> host: crictl pods:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: crictl containers:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> k8s: describe netcat deployment:
error: context "false-610916" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-610916" does not exist

>>> k8s: netcat logs:
error: context "false-610916" does not exist

>>> k8s: describe coredns deployment:
error: context "false-610916" does not exist

>>> k8s: describe coredns pods:
error: context "false-610916" does not exist

>>> k8s: coredns logs:
error: context "false-610916" does not exist

>>> k8s: describe api server pod(s):
error: context "false-610916" does not exist

>>> k8s: api server logs:
error: context "false-610916" does not exist

>>> host: /etc/cni:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: ip a s:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: ip r s:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: iptables-save:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: iptables table nat:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> k8s: describe kube-proxy daemon set:
error: context "false-610916" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-610916" does not exist

>>> k8s: kube-proxy logs:
error: context "false-610916" does not exist

>>> host: kubelet daemon status:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: kubelet daemon config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> k8s: kubelet logs:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-623987
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-416029
contexts:
- context:
    cluster: cert-expiration-623987
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-623987
  name: cert-expiration-623987
- context:
    cluster: stopped-upgrade-416029
    user: stopped-upgrade-416029
  name: stopped-upgrade-416029
current-context: ""
kind: Config
users:
- name: cert-expiration-623987
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key
- name: stopped-upgrade-416029
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-610916

>>> host: docker daemon status:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: docker daemon config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/docker/daemon.json:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: docker system info:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: cri-docker daemon status:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: cri-docker daemon config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: cri-dockerd version:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: containerd daemon status:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: containerd daemon config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/containerd/config.toml:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: containerd config dump:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: crio daemon status:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: crio daemon config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: /etc/crio:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

>>> host: crio config:
* Profile "false-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-610916"

----------------------- debugLogs end: false-610916 [took: 3.112940759s] --------------------------------
helpers_test.go:176: Cleaning up "false-610916" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-610916
--- PASS: TestNetworkPlugins/group/false (3.44s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-606662
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-606662: (1.284243467s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (6.45s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606662 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606662 --driver=docker  --container-runtime=crio: (6.446015426s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-606662 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-606662 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.950145ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)
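The "exit status 3" above is the expected outcome: systemctl is-active exits 0 for an active unit and non-zero otherwise, with 3 here signalling an inactive kubelet. A hand-run sketch of the same check, using this run's profile name:

    out/minikube-linux-amd64 ssh -p NoKubernetes-606662 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # expect 3 on a --no-kubernetes node: kubelet is not running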

                                                
                                    
TestPreload/Start-NoPreload-PullImage (61.3s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-785573 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1228 06:54:11.784890    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:54:12.844010    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/addons-614829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-785573 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (48.501473277s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-785573
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-785573: (12.24456902s)
--- PASS: TestPreload/Start-NoPreload-PullImage (61.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.490748597s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.49s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (46.03s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-785573 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.707919861s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-785573 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (46.03s)
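Taken together, the two TestPreload steps verify that an image pulled into a --preload=false cluster survives a stop and a preloaded restart. The same flow by hand, with the commands lifted from the log above (the profile name is this run's):

    out/minikube-linux-amd64 start -p test-preload-785573 --memory=3072 --preload=false --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-785573 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
    out/minikube-linux-amd64 stop -p test-preload-785573
    out/minikube-linux-amd64 start -p test-preload-785573 --preload=true --wait=true --driver=docker --container-runtime=crio
    # the pulled busybox tag should still appear in the listing
    out/minikube-linux-amd64 -p test-preload-785573 image list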

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (43.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.724501782s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (43.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-694122 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [61d32e53-cd45-4fce-a261-3b03793d8472] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [61d32e53-cd45-4fce-a261-3b03793d8472] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003822324s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-694122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)
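The DeployApp pattern above (create, wait for the labelled pod to go Ready, then probe it) can be reproduced without the test harness; kubectl wait is a rough stand-in for the harness's pod polling, and the busybox manifest is the test's own testdata:

    kubectl --context old-k8s-version-694122 create -f testdata/busybox.yaml
    # stand-in for the harness's 8m0s poll on integration-test=busybox
    kubectl --context old-k8s-version-694122 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # the actual assertion: the pod's open-file limit
    kubectl --context old-k8s-version-694122 exec busybox -- /bin/sh -c "ulimit -n"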

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-694122 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-694122 --alsologtostderr -v=3: (12.22744724s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122: exit status 7 (80.74718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-694122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
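The "(may be ok)" note exists because minikube status encodes state in its exit code. As I read minikube's status command, the code is a bitmask (1 = host stopped, 2 = kubelet stopped, 4 = apiserver stopped), so 7 is exactly what a cleanly stopped profile should return. A sketch under that assumption:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
    echo $?   # 7 = 1+2+4: host, kubelet, and apiserver all report stopped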

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-694122 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.235165758s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694122 -n old-k8s-version-694122
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-950460 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d94669e7-4dff-498c-96af-58fd76221f43] Pending
helpers_test.go:353: "busybox" [d94669e7-4dff-498c-96af-58fd76221f43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d94669e7-4dff-498c-96af-58fd76221f43] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004088205s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-950460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (42.181692808s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-416029
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-416029: (2.149426025s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (37.894065492s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-950460 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-950460 --alsologtostderr -v=3: (13.467789448s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460: exit status 7 (116.816934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-950460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-950460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.050327025s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950460 -n no-preload-950460
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-422591 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3] Pending
helpers_test.go:353: "busybox" [b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b72d2a4e-49f9-4dfb-bdc6-5dce7700bbe3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003988277s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-422591 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [68eee6fa-3951-4c02-bfa6-e8dd801288c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [68eee6fa-3951-4c02-bfa6-e8dd801288c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003939256s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qf9rt" [3117b9db-546f-40fa-8346-edd08efb1341] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004147301s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qf9rt" [3117b9db-546f-40fa-8346-edd08efb1341] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00320908s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-694122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-422591 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-422591 --alsologtostderr -v=3: (12.266159156s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-500581 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-500581 --alsologtostderr -v=3: (12.065497191s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694122 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
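The "Found non-minikube image" lines come from scanning the JSON image listing for tags outside the stock Kubernetes set. A rough jq equivalent, assuming the listing exposes a repoTags field (that field name is my assumption, not confirmed by this log):

    out/minikube-linux-amd64 -p old-k8s-version-694122 image list --format=json \
      | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'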

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591: exit status 7 (79.381178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-422591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-422591 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.589710498s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-422591 -n embed-certs-422591
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (22.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (22.583303677s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581: exit status 7 (89.688602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-500581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-500581 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.895517515s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-500581 -n default-k8s-diff-port-500581
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-52cwp" [9dbb57b9-4c66-4a57-b957-2158426bacd6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003886437s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-52cwp" [9dbb57b9-4c66-4a57-b957-2158426bacd6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005342195s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-950460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950460 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-479871 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-479871 --alsologtostderr -v=3: (12.082549574s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.08s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (39.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.231530656s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871: exit status 7 (81.886862ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-479871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-479871 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (10.630627498s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-479871 -n newest-cni-479871
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-479871 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cl9z8" [d866b1ea-f358-4b36-9905-c57f743ba21d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003652312s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-h42vt" [e3e01699-6236-4c25-9b57-af036385d6c9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003346734s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cl9z8" [d866b1ea-f358-4b36-9905-c57f743ba21d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00405309s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-500581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-h42vt" [e3e01699-6236-4c25-9b57-af036385d6c9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003539509s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-422591 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (44.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1228 06:57:48.736102    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/functional-932789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.736635291s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-500581 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-422591 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-610916 "pgrep -a kubelet"
I1228 06:57:58.525491    9076 config.go:182] Loaded profile config "auto-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-l62zg" [f9b1da89-8e90-4eeb-a886-4a5666a38454] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-l62zg" [f9b1da89-8e90-4eeb-a886-4a5666a38454] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004317736s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.590888099s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.59s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (41.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (41.375519536s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (41.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
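The three connectivity probes above (DNS, Localhost, HairPin) ride on the same netcat invocation: -z opens the connection without sending data, -w 5 caps the wait at five seconds, and the exit code is the verdict. The same checks by hand, commands as run in the log:

    # DNS: service discovery inside the cluster
    kubectl --context auto-610916 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port
    kubectl --context auto-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself through its own service name
    kubectl --context auto-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"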

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (61.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.061243738s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-m8sx6" [7351d386-7b11-43fb-bb8c-fc331ab62420] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006097668s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-610916 "pgrep -a kubelet"
I1228 06:58:39.124481    9076 config.go:182] Loaded profile config "kindnet-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-prwmn" [60c1f3aa-a4ad-46dd-a71d-a81bf822f750] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-prwmn" [60c1f3aa-a4ad-46dd-a71d-a81bf822f750] Running
I1228 06:58:42.568348    9076 config.go:182] Loaded profile config "custom-flannel-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003712985s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-610916 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-610916 replace --force -f testdata/netcat-deployment.yaml
I1228 06:58:42.905123    9076 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qjmrn" [18530f37-7e70-4f77-9baa-f619e73e1c54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-qjmrn" [18530f37-7e70-4f77-9baa-f619e73e1c54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004022707s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-vbnxr" [8912787f-2e4b-4a14-a232-cb0b95ff5a04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004020325s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-610916 "pgrep -a kubelet"
I1228 06:58:56.690914    9076 config.go:182] Loaded profile config "calico-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-n9dmn" [c170f93f-87b9-492f-b968-9f828fc6d979] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-n9dmn" [c170f93f-87b9-492f-b968-9f828fc6d979] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004544294s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.23s)
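
The Pending-then-Running lines above are the poll loop observing the pod before its dnsutils container pulls and starts; the subtest only passes once every pod matching app=netcat is Ready. A one-liner that waits on the same condition (a sketch, not the harness's own mechanism):

    kubectl --context calico-610916 wait pod -l app=netcat --for=condition=Ready --timeout=15m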

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (44.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (44.135319037s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.14s)

TestNetworkPlugins/group/bridge/Start (66.07s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-610916 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m6.068757174s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.07s)
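
Both Start runs use the same invocation shape and differ only in the --cni value. Split across lines for readability (flag meanings as documented by minikube):

    # --memory is in MiB; --wait/--wait-timeout block until core components are healthy;
    # --cni picks the network plugin under test (flannel in the sibling run above)
    out/minikube-linux-amd64 start -p bridge-610916 --memory=3072 \
        --wait=true --wait-timeout=15m --cni=bridge \
        --driver=docker --container-runtime=crio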

TestPreload/PreloadSrc/gcs (3.87s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-822442 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-822442 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.548712401s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-822442" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-822442
I1228 06:59:30.542347    9076 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
--- PASS: TestPreload/PreloadSrc/gcs (3.87s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-610916 "pgrep -a kubelet"
I1228 06:59:29.981789    9076 config.go:182] Loaded profile config "enable-default-cni-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-vsbxw" [1603ed10-7ac7-4488-a7a5-ddd2417c3430] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-vsbxw" [1603ed10-7ac7-4488-a7a5-ddd2417c3430] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003879593s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

TestPreload/PreloadSrc/github (4.32s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-013961 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-013961 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.11462844s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-013961" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-013961
--- PASS: TestPreload/PreloadSrc/github (4.32s)

TestPreload/PreloadSrc/gcs-cached (0.58s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-177337 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-177337" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-177337
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.58s)
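
The gcs-cached variant finishes in well under a second because the v1.34.0-rc.2 preload was already fetched by the run just above, so the gcs source is apparently satisfied from the local cache rather than the network. Assuming minikube's standard cache layout (and the default MINIKUBE_HOME), the tarballs can be inspected with:

    ls ~/.minikube/cache/preloaded-tarball/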

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-v6hhs" [cf2fa770-4524-430f-a955-add1bdf5736a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004477624s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-610916 "pgrep -a kubelet"
I1228 06:59:58.325250    9076 config.go:182] Loaded profile config "flannel-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-h6jkh" [b02bf82c-df29-401c-ba9c-6ee5a7f44d90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-h6jkh" [b02bf82c-df29-401c-ba9c-6ee5a7f44d90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004321976s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-610916 "pgrep -a kubelet"
E1228 07:00:20.185929    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/old-k8s-version-694122/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1228 07:00:20.218769    9076 config.go:182] Loaded profile config "bridge-610916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-610916 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tw96n" [fa4cc040-14a3-4e33-80d7-504be59cc0cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1228 07:00:22.746617    9076 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/old-k8s-version-694122/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-tw96n" [fa4cc040-14a3-4e33-80d7-504be59cc0cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003647899s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-610916 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-610916 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-719168" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-719168
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.34s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-610916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-610916

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-610916

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/hosts:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/resolv.conf:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-610916

>>> host: crictl pods:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: crictl containers:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> k8s: describe netcat deployment:
error: context "kubenet-610916" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-610916" does not exist

>>> k8s: netcat logs:
error: context "kubenet-610916" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-610916" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-610916" does not exist

>>> k8s: coredns logs:
error: context "kubenet-610916" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-610916" does not exist

>>> k8s: api server logs:
error: context "kubenet-610916" does not exist

>>> host: /etc/cni:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: ip a s:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: ip r s:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: iptables-save:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: iptables table nat:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-610916" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-610916" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-610916" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: kubelet daemon config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> k8s: kubelet logs:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-623987
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-416029
contexts:
- context:
    cluster: cert-expiration-623987
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-623987
  name: cert-expiration-623987
- context:
    cluster: stopped-upgrade-416029
    user: stopped-upgrade-416029
  name: stopped-upgrade-416029
current-context: ""
kind: Config
users:
- name: cert-expiration-623987
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key
- name: stopped-upgrade-416029
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.key
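
Note that current-context is empty in the dump above because the kubenet profile was never created; the clusters listed belong to other profiles still alive on the same Jenkins host. Inspecting or switching between them uses stock kubectl:

    kubectl config get-contexts
    kubectl config use-context cert-expiration-623987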

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-610916

>>> host: docker daemon status:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: docker daemon config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: docker system info:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: cri-docker daemon status:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: cri-docker daemon config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: cri-dockerd version:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: containerd daemon status:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: containerd daemon config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: containerd config dump:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: crio daemon status:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: crio daemon config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: /etc/crio:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

>>> host: crio config:
* Profile "kubenet-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-610916"

----------------------- debugLogs end: kubenet-610916 [took: 3.187051712s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-610916" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-610916
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)

TestNetworkPlugins/group/cilium (3.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-610916 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-610916

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> k8s: describe netcat deployment:
error: context "cilium-610916" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-610916" does not exist

>>> k8s: netcat logs:
error: context "cilium-610916" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-610916" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-610916" does not exist

>>> k8s: coredns logs:
error: context "cilium-610916" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-610916" does not exist

>>> k8s: api server logs:
error: context "cilium-610916" does not exist

>>> host: /etc/cni:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: ip a s:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: ip r s:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: iptables-save:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: iptables table nat:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-610916

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-610916

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-610916" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-610916" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-610916

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-610916

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-610916" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-610916" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-610916" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-610916" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-610916" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: kubelet daemon config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> k8s: kubelet logs:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-623987
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-5550/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-416029
contexts:
- context:
    cluster: cert-expiration-623987
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 06:51:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-623987
  name: cert-expiration-623987
- context:
    cluster: stopped-upgrade-416029
    user: stopped-upgrade-416029
  name: stopped-upgrade-416029
current-context: ""
kind: Config
users:
- name: cert-expiration-623987
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/cert-expiration-623987/client.key
- name: stopped-upgrade-416029
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.crt
    client-key: /home/jenkins/minikube-integration/22352-5550/.minikube/profiles/stopped-upgrade-416029/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-610916

>>> host: docker daemon status:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: docker daemon config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: docker system info:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: cri-docker daemon status:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: cri-docker daemon config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: cri-dockerd version:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: containerd daemon status:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: containerd daemon config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: containerd config dump:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: crio daemon status:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: crio daemon config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: /etc/crio:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

>>> host: crio config:
* Profile "cilium-610916" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-610916"

----------------------- debugLogs end: cilium-610916 [took: 3.29645943s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-610916" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-610916
--- SKIP: TestNetworkPlugins/group/cilium (3.45s)
